Agents

Responsibility, Obligation and Trust

Software agents representing human users will necessarily pose (or at least potentially pose) many of the same threats in their environments as the users they represent. They may attempt to access, transfer, delete or alter information that they should not. They may deny access to legitimate users by disrupting normal channels of communication or by consuming too many resources themselves. They may mislead or misinform other users and other agents [1].

All of these problems may be mirrored in the relationship between an agent and the user it represents. That is, a faulty or deliberately misconstructed agent may pose all of the above threats not only to other entities and systems but also to the user it is supposed to be representing.

Ordinary software also poses these threats: a word processor with a bug may delete an important document; a malfunctioning program may clog a network with spurious transmissions; a spreadsheet may miscalculate a cell, and the resulting incorrect information may lead to erroneous decisions. Thus the general issues of software error and deliberate criminal action using software are not new.

Agents however add a new dimension. The user of an agent has delegated some responsibility in some area to that agent. Thus if the agent misbehaves, it is as if the user him- or herself has misbehaved. This is a larger issue than the misuse of a passive tool: A sword, for example, is in the direct control of the person wielding it and the responsibility for damage inflicted is clear and direct [2].

But what of a stockbroker, for example? Who is responsible if his or her trading results in damage? Is there a difference between the damage caused to a buyer (whose broker has paid too much for low-value stocks) and damage caused to a seller (whose broker has sold stocks for too low a price)?

It is unlikely, except in the grossest cases of incompetence, that this could be determined in a black-and-white way. The instructions given would have to be inspected carefully and considered in the light of the nature of the market, the capacities of the broker, the intent of the client and a host of other minutiae - and in the end, any answer would remain a matter of judgement, not hard and fast fact.

Most discussion of agent responsibility in technically oriented papers is strongly concerned with defence. That is, it is concerned with preventing external agents from causing damage to a resource or hosting environment. Typically, the same strategies can be applied just as appropriately by the agent's client as by the other parties with which (or with whom) the agent interacts.

Although the statement is made with reference only to financial liability, the point made by Harrison et al. (1995, p.4) about agent responsibility extends easily:

Clearly this is equally true of the other party, who will want to limit credit extended, for example, or resources consumed.

A typical technical approach is that taken by Thirunavukkarasu et al. (1995) in a proposed security architecture for KQML. The issues they identify (p.2) are authentication, preservation of message integrity, protection of privacy, detection of message duplication or replay, non-repudiation of messages and the prevention of message hijacking (this last should more properly be termed prevention of authentication hijacking).
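
To make these categories concrete, the sketch below shows how a message wrapper might address three of them - authentication, message integrity and detection of replay - using a keyed hash and a record of nonces already seen. It is not drawn from the KQML proposal; the field names, the shared-key scheme and the nonce store are assumptions made purely for illustration.

    # Illustrative sketch only, not the proposed KQML security architecture.
    import hashlib
    import hmac
    import json
    import time

    SEEN_NONCES = set()   # nonces this receiver has already accepted

    def sign_message(content, sender, shared_key):
        """Wrap a message with a fresh nonce and a keyed hash over its body."""
        body = {
            "sender": sender,
            "content": content,
            "nonce": hashlib.sha256(str(time.time_ns()).encode()).hexdigest(),
        }
        mac = hmac.new(shared_key, json.dumps(body, sort_keys=True).encode(),
                       hashlib.sha256).hexdigest()
        return {"body": body, "mac": mac}

    def verify_message(message, shared_key):
        """Check authenticity, integrity and freshness of a received message."""
        body, mac = message["body"], message["mac"]
        expected = hmac.new(shared_key, json.dumps(body, sort_keys=True).encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(mac, expected):
            return False              # fails authentication or integrity check
        if body["nonce"] in SEEN_NONCES:
            return False              # duplicate or replayed message
        SEEN_NONCES.add(body["nonce"])
        return True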

While these issues are indeed important, they are not significantly different in an agent context than in any other context. Other issues, rather more encompassing than information integrity, arise in agent-based systems.

Responsibility

"Responsibility" is an intentional description of the nature of a delegated task. If one agent requests another to do something, and the other accepts the task, then the requestee has become responsible for the completion of that task [3].

If an agent is not capable of carrying out a task, clearly issues of responsibility do not apply to that task. Most humans cannot fly unaided - it does not make sense either to demand of a person that he or she fly, or to state that he or she has an obligation not to fly. In Goodwin's terms these are inconsistent and inevitable tasks respectively (1993, pp.44-45).

Responsibility only becomes a meaningful term in a context where, given a goal and an ability, an agent can choose whether or not to exercise that ability in moving towards the goal. The concept of permission that we are used to in the computing field [4] is thus only an aspect of responsibility in the sense that agents with the power to prevent access to resources might choose whether or not to do so. From the point of view of an agent seeking access, responsibility is not an issue [5].

In short, responsibility is intimately tied up with choice.
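
As a rough illustration of that point, the sketch below models responsibility as something that attaches only when a capable agent chooses to accept a delegated task. The class, its fields and the trivial decision policy are all assumptions invented for this example.

    class Agent:
        def __init__(self, name, capabilities):
            self.name = name
            self.capabilities = set(capabilities)
            self.responsibilities = []

        def consider(self, task):
            """Decide whether to take on a delegated task."""
            if task not in self.capabilities:
                return False                 # incapable: responsibility cannot arise
            accepted = self.choose(task)     # a genuine choice, however it is made
            if accepted:
                self.responsibilities.append(task)   # responsibility now attaches
            return accepted

        def choose(self, task):
            return True                      # placeholder for a real decision policy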

Responsibility has another meaning also - that of doing a task in a way which is optimally appropriate, possibly in contexts far removed from the specifics of the task itself. To "act responsibly" an agent must operate with due regard for those other contexts. For example, perhaps a task has a heavy resource requirement. Even if sufficient resources are available, perhaps some should be left in case other processes need them. An agent which takes account of such matters is being responsible in this other sense.

Obligation

Related to responsibility is the concept of "obligation". White (1995, pp.8-10) describes how, using the concept of an authority, limits are set on the actions a Telescript agent can perform. This is the agent equivalent of permissions [4].

However, the authority to carry out an action is only one side of the coin. Responsibility may carry with it an obligation to carry out certain actions.
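
The distinction can be pictured crudely as two separate sets attached to an agent: an authority says what the agent may do, an obligation says what it must do. This is not Telescript's mechanism; the class and its fields below are invented for illustration only.

    class AgentContract:
        def __init__(self, authorities, obligations):
            self.authorities = set(authorities)   # actions the agent is permitted to take
            self.obligations = set(obligations)   # actions the agent has undertaken to take

        def may(self, action):
            return action in self.authorities

        def must(self, action):
            # An obligation is only meaningful where the matching authority exists.
            return action in self.obligations and action in self.authorities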

Consider the situation where an agent has been asked to negotiate for a product, the agent has sufficient credit to transact business, the seller has the required product - but the agent fails to purchase the goods. This is where the notion of obligation comes in. Was the agent obliged to purchase the product? If so, why? Under what conditions might an agent reasonably decide not to carry out its task?

Obligations may exist between agents; certainly they exist between agents and their users. In one sense, obligation is an intentional description (Wooldridge & Jennings, 1995, p.8) of goals, of the complex decision making process by which an agent chooses one course of action over another. In human interaction, the concept of obligation carries with it the implication of conflict and of conflict resolution. Similarly with agents there may be a conflict of goals in the execution of a task, and obligation is a useful term to use to classify the conflicting goals.

There are some implied obligations for any agent, which may seem obvious and trivial, such as the obligation to finish the task in a reasonable time and the obligation to minimise the resources required to complete the task.
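
One way to picture such implied obligations is as explicit fields attached to the contracted task, as in the sketch below. The field names and the simple test are assumptions for illustration, not a proposal.

    import time
    from dataclasses import dataclass

    @dataclass
    class ContractedTask:
        description: str
        deadline: float        # implied obligation: finish in a reasonable time
        resource_budget: int   # implied obligation: consume no more than this
        resources_used: int = 0

        def obligations_met(self):
            return (time.time() <= self.deadline
                    and self.resources_used <= self.resource_budget)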

Krogh (1995, p.3) makes a distinction between necessity and obligation, describing necessity as "what must be the case" and obligations as "what ought ideally to be the case". Necessities (constraints) in this sense are familiar and certainly not unique to agents. Any disk operating system will have similar trade-offs to make. However, the distinction Krogh makes must itself be approached with caution, because the real issue is again one of choice. Matters about which the agent has no choice are by definition constraints.

Limited resources are only constraints if the agent has no mechanisms to deal with their lack. In that case, lack of a given resource removes all choice from the agent with respect to the desired action. If the agent is able to modify its behaviour to deal with the lack, then the limited resource becomes just another problem to be surmounted.
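
The distinction can be illustrated as follows: a shortage only becomes a hard constraint when every way the agent has of adapting to it is exhausted. The Strategy type and its fields are invented for this example.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Strategy:
        name: str
        cost: int
        run: Callable[[str], str]

    def attempt(task, strategies, available_resources):
        """Try strategies in order of preference, adapting to scarce resources."""
        for strategy in strategies:
            if strategy.cost <= available_resources:
                return strategy.run(task)    # the shortage was surmountable
        # Only here, with no alternative left, does the shortage act as a constraint.
        raise RuntimeError("insufficient resources for task: " + task)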

Obligation is more properly used in situations of voluntary dependence, where agents contract with one another (or with human users) to carry out particular tasks.

Before accepting an obligation, an agent must decide whether or not to do so. Presumably the decision to accept an obligation will be taken as part of a wider effort to reach a particular goal. The agent (or human user) seeking to impose the obligation is also doing so as part of a larger goal. The acts of imposing and accepting obligations are thus complementary. Refusal to accept an obligation is itself a decision that is informed by the goals the agent has and refusal may have its cost as well (see the discussion of cooperation below).

Jennings and Wooldridge (1995, p.10) point out that in a system composed of autonomous agents, problems such as deadlock and starvation may arise. This is taken further by Krogh (1995, p.9), who makes a very clear case (essentially a reductio ad absurdum of computational cost) that obligations in a multi-agent environment cannot be constrained so as to prevent conflicts. His point is simply that to avoid such conflicts, either perfect knowledge of all extant obligations must be available to all participating agents, or some mechanism must exist for agents to check for conflicts before accepting a new obligation. This would involve either a combinatorially explosive messaging scheme or a centralised registry of obligations, which would itself suffer a combinatorial explosion in attempting to determine possible conflicts.
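
A naive version of the centralised alternative is easy to sketch, and makes the cost visible. The registry and the pairwise conflicts_with test are assumptions; real conflict detection would have to consider whole sets of obligations and their possible future interactions, which is where the combinatorial explosion bites.

    class ObligationRegistry:
        """Central registry that vets each new obligation against all existing ones."""

        def __init__(self, conflicts_with):
            self.obligations = []
            self.conflicts_with = conflicts_with   # assumed pairwise conflict test

        def try_register(self, new_obligation):
            # Even this simplest pairwise check requires global knowledge of every
            # extant obligation; checking interactions among sets of obligations
            # grows combinatorially with their number.
            for existing in self.obligations:
                if self.conflicts_with(new_obligation, existing):
                    return False               # reject to keep the system conflict-free
            self.obligations.append(new_obligation)
            return True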

Jennings (1992) discusses obligation within multi-agent systems, specifically in areas where overall outcomes are created by teams of cooperating agents. In that context, he notes that joint action might be considered when "no individual is capable of achieving a desired objective alone" (p.1). He goes on to discuss what happens when a jointly desired objective ceases to be desired by one or more of the cooperating agents.

Trust

From the above, clearly there will be times when an agent does not do what might be expected of it (leaving aside deterministic arguments). In a system containing agents with possibly conflicting responsibilities and obligations, not to mention agents that may be defective, poorly trained (in the case of learning agents) or carrying out the instructions of malicious persons, how do people (or for that matter other agents) decide which agents to trust?

White (1995, p.8) says "[Telescript] agents and places can discern but neither withhold nor falsify their authorities. Anonymity is precluded." This is an admirable quality, but as we have seen in the discussion of obligation above, identity and authority are not sufficient to preclude defection [6]. Defection might occur not through malice but as a reaction to quite different pressures, such as a greater obligation elsewhere.

The very complexity that leads us to distinguish agents from other forms of software also prevents any certainty as to the motives and capabilities of the agents we may encounter. As Jennings and Wooldridge (1995, p.10) put it:

For agents in many contexts, the decision between cooperation and defection will be a matter of great importance. Much as a credit rating or a criminal record affects a person's abilities and opportunities in our social context, the degree to which many agents can be effective will be affected by the extent to which they are trusted by the other entities in their environment.

Krogh (1995, p.2) gives an example where defection for short-term gain results in a penalty being imposed against the defector by the other party, restricting the defector's access to information held by the other party. Similar scenarios are not difficult to construct. It seems likely that this kind of "reward and punishment" approach would be a fruitful and appropriate one for many situations [7].
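
A small sketch of such a scheme is given below. The scoring rule, the starting score and the access threshold are arbitrary assumptions chosen only to show the shape of the idea: cooperation slowly builds trust, defection erodes it faster, and an agent whose score falls below the threshold loses access.

    class TrustLedger:
        def __init__(self, threshold=0.3):
            self.scores = {}              # agent name -> trust score in [0, 1]
            self.threshold = threshold

        def record(self, agent, cooperated):
            score = self.scores.get(agent, 0.5)
            if cooperated:
                score += 0.1 * (1.0 - score)   # reward: move toward full trust
            else:
                score -= 0.5 * score           # punish: defection costs more than cooperation earns
            self.scores[agent] = score

        def grants_access(self, agent):
            return self.scores.get(agent, 0.5) >= self.threshold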

However, risk management will be closely allied to any such approach. If defection by an agent would cause too much loss or damage, defection must be made correspondingly less likely. As with any other matter, the cost of protection must be weighed against the cost of defection.
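
In its simplest form the trade-off is an expected-loss comparison, as in the sketch below. The probabilities, and the assumption that protection merely lowers the chance of defection, are illustrative only.

    def protection_worthwhile(p_defection, loss_if_defection, cost_of_protection,
                              p_defection_with_protection=0.0):
        """Protection pays only while it costs less than the expected loss it removes."""
        expected_loss_unprotected = p_defection * loss_if_defection
        expected_loss_protected = p_defection_with_protection * loss_if_defection
        return cost_of_protection < expected_loss_unprotected - expected_loss_protected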


[1] Two summaries of the kinds of threats that agents may pose in various environments are given by Harrison et al. (1995, pp.3-5, 12) and by Chess et al. (1995, pp.16-17).

[2] Of course, the sword-wielder may be acting as a mercenary. The relationship of interest here is between the sword and the wielder.

[3] Ultimate responsibility still rests with the agent originally given the task. It is part of that responsibility to ensure that any delegated task or subtask is delegated to some entity that can and will complete it.

[4] For example, Unix file permissions and similar restrictions imposed on computational processes by their environments.

[5] Except perhaps where a meta-responsibility exists: The act of requesting access may itself be something that the agent has a responsibility to do; the act of requesting access also reveals something about the agent and/or its user, so the agent may need to decide whether or not to request access.

[6] The term is taken from the terminology of a thought experiment known as "the Prisoners' Dilemma" (Hofstadter, 1987, pp.715-734). In this experiment, an exchange of two items, each of value to the other party, is agreed between two parties. It is agreed that the items, each concealed in a sack, will be exchanged at the same instant. Clearly either party could hand over an empty sack - that is, "defect". If the other party "cooperates" (that is, does not defect), clearly the defector is better off, having received the exchanged item at no cost. If both parties defect, neither has lost anything. If both parties cooperate, both have gained.

[7] An interesting discussion of some technical mechanisms that might be employed to ensure appropriate levels of information reliability in a multi-agent environment is given in (Foner, 1995).

