In the world of commercial software almost anything with some element of automation is tagged as an "agent", especially in the field of client/server applications. In the field of computing research, the question "what is an agent?" is still vexed (Wooldridge & Jennings, 1995, p.4).
One difficulty is the question of "depth"; as Turing pointed out, once we know what method is actually used to achieve a particular end, no matter how sophisticated that method may be, we tend to dismiss it as uninteresting. Nearly forty years later, Brooks (1986) was ruefully pointing out that little had changed.
The more that we know about what "agents" do, the less we think of them as "agents" and the more we think of them as "just software". At what level in the intricate hierarchy of components between hardware and the human mind do we stop (or start) treating agents as a special class of software?
Why should we be interested in agents at all? Why is the idea of an agent, rather than "just software", satisfying and exciting? Part of the answer is surely the "Golem effect": the excitement of creating something independently powerful, able to range abroad to carry out our will. A more practical reason is the host of potential benefits offered by software constructed with some or all of the attributes being collected under the rubric "agent". Though this aspect is not deeply explored in this paper, readers may find interesting an excellent summary of some of the practical benefits (of mobile agents in particular) given by Harrison et al. (1995), who note that while many specific problems might equally well be solved by specific "ordinary software" as by agents, agent-based approaches encompass solutions to a great many of those problems in one sweep.
This paper looks at some of the main attributes proposed for agents, and endeavours to determine which of them are essential to the concept of agency.
The specific attributes looked at are independence (autonomy), intelligence, communication, learning, mobility and representation of the user.
Independence, or autonomy, is a fundamental attribute of agency in almost all the literature. Independence in this context is the ability to exist and operate in some sense separately from any controlling entity. Some researchers (notably Genesereth and Ketchpel (1994)) don't explicitly require independence in their definitions of agency, but even in such limiting definitions as theirs, a strong thread of implied independence exists - Genesereth and Ketchpel, for example, emphasise "distinct" processes and threads and suggest their operation as separate entities.
Independence embodies several of the other identified attributes - certainly intelligence. Without the capability to make decisions for itself, any separated entity will be unable to operate in that separated state.
Whether intelligence implies an ability to learn (which in this paper is defined rather loosely as the capacity to improve responses to challenges) is probably a moot point with regard to agents. However, issues of whether agents should learn, and if so what (not to mention how), are very high on the agenda of most agent researchers, particularly those working on collaboration between agents (see Jennings, 1992 or Lashkari et al., 1994 for example).
Mobility is an interesting attribute, and much development is being done on mobile agents (sometimes called itinerant agents). It seems debatable whether mobility is a necessary attribute of agency, though mobility is obviously a very useful ability for an agent to have in many applications.
The matter of what constitutes "mobility" is also not as intuitive as it may seem. One common usage of the term applies to any object which is not fixed in space - but is a program which can be copied from machine to machine "mobile", or merely "transportable"? Do we consider a program like MS-DOS to be mobile merely because it can operate on many identical machines?
Communication as a feature of agents is treated in a very wide variety of ways. Genesereth and Ketchpel (ibid.) regard communication in special agent languages to be the defining characteristic of agency, but it is easy to conceive of an agent which might carry out tasks intelligently, independently, possibly even moving from environment to environment to do so, but would never communicate with any other agent or user - for example, an agent designed to recover disk space in a system of networked computers. For researchers working on collaborative agents, communication is also a fundamental requirement.
Clearly all these attributes - with the possible exception of mobility - are intertwined. Clearly also, each attribute may be present in a given agent to a very small or a very great degree, or anywhere between those two extremes. There seems no simple way to say "yes, this agent is independent (or intelligent, or mobile, or whatever)".
Much current research focuses on the technical aspects of agents, and the above attributes are essentially technical attributes. One quality that is not directly addressed by the above is the issue of representation - the extent to which an agent represents some other entity, such as a human user. As soon as agents are seen as representing other entities, issues such as responsibility for the actions of those agents take on new importance.
There are many security issues raised by independently operating, intelligent and possibly mobile software units such as agents; however, most of these issues are comparatively old and well understood. Such issues include identification, data integrity, and so on. With software actually taking the place of human users in some negotiations, a whole new level of integrity is required - integrity in relationships between entities becomes an important facet of any system involving agents. This paper looks at what it means in terms of agency to be responsible and to have obligations.
Ultimately, this paper is asking the question "what is an agent?" Applied as it commonly is, the term is too broad. For the term to be useful, there needs to be a common appreciation of the kinds of things that are appropriately termed "agents" - equally there must be a common appreciation of the kinds of things that are not agents.
It is the contention of the author that the defining characteristic of agency is representation of the user. More rigorous argument than can be presented in this paper is needed to fully support this contention. However, by looking at some of the attributes that are commonly seen as defining agency, this paper seeks to create a modest framework for considering candidates for the title of "agent".
 "Artificial Intelligence researchers are fond of pointing out that AI is often denied its rightful successes. The popular story goes that when nobody has any good idea of how to solve a particular sort of problem (e.g. playing chess) it is known as an AI problem. When an algorithm developed by AI researchers successfully tackles such a problem, however, AI detractors claim that since the problem was solvable by an algorithm, it wasn't really an AI problem after all. Thus AI never has any successes." (Brooks, 1986)
 "We have seen [..] that while there are many individual areas where mobile agents offer advantages, there are few if any overwhelming advantages among these and that in almost every case, an equivalent solution can be found that does not require mobile agents. However, if we stand back and look at the sum of these advantages, that is all the functions that a mobile agent framework enables, then a much stronger case emerges." (Harrison et al., 1995)