RETSINA, D'Agents, ERA, CHiPs, Combinatorial Auctions, Teamcore
COORDINATING AGENT POPULATION
How many agents? [1] 10s 100s 1000s 10s 10s 10s
Are human agents part of the population? yes no yes through surrogate processes no through teamcore
Heterogeneous beliefs? [2] yes yes yes yes yes yes
Heterogeneous capabilities?[3] yes yes yes yes yes yes
   If heterogeneous, overlapping capabilities?[4] yes yes yes usually yes
Heterogeneous preferences?[5] yes yes yes yes yes yes
   Necessarily conflicting preferences?[6] no no no no no (but uninteresting if not yes) no
Heterogeneous languages?[7] no no yes no n/a (bids in common) yes
Heterogeneous ontologies?[8] yes n/a yes no n/a (bids in common) no
Heterogeneous architectures?[9] yes yes yes no no yes
Dynamically changing population:
   Agent arrivals and departures?[10] both both both both both both
   During coordination?[11] yes yes yes yes yes yes
AGENT KNOWLEDGE
Of environment:
   Static aspects?[12] yes yes no yes possibly yes
   Dynamic aspects?[13] yes partial no partial possibly yes
Explicit model of self:
     Capabilities?[14] yes yes yes yes possibly yes
     Beliefs?[15] yes yes no yes yes yes
     Preferences?[16] yes yes no yes yes yes
     Plans?[17] yes yes no *yes* possibly yes
Explicit model of others':
     Capabilities?[18] yes yes no no no yes
     Beliefs?[19] no n/a no not yet no yes
     Preferences?[20] no indirect no no yes partial
     Plans?[21] partial no no *partial* no partial
Knowledge of env/self/others learned? yes yes yes no yes
Knowledge of env/self/others learned through observation?[22] yes no env yes partial for env/others
Knowledge of env/self/others learned through communication?[23] no yes (manual intervention) others no others
COMMUNICATION
How semantically rich is the communication language (e.g., a number (a price) is low, while a plan is high)?[24] high low low high low high
Are the messages of different types?[25] yes no (few) yes no no yes
Is point-to-point communication used?[26] yes yes yes yes yes yes
Is broadcast communication used?[27] sometimes no yes no could be yes
Is multicast communication used?[28] yes no  yes no could be yes
Is communication asynchronous? yes yes
AGENT COMPLEXITY
Mapping of preferences into actions:
  Is given?[29] no no application dependent no no no
  Is based on current beliefs?[30] yes yes application dependent yes yes yes
  Requires planning?[31] yes *yes* application dependent *yes* yes may be
  Requires learning?[32] no some application dependent no yes could be, but not supported
Number of different kinds of objectives an agent is capable of achieving ("one" means the agent has a very specific role in the network; "few" means the agent can fulfill any of a number of roles (achieve different kinds of tasks) in the network; "many" means that the agent could take on any or most of the roles (tasks) in the network).[33] many few many few
Agent capabilities allow alternative ways of accomplishing an objective?[34] yes *yes* yes *yes* *yes* yes
An agent can respond to domain dynamics by choosing an alternative way of accomplishing an objective unilaterally, at runtime?[35] yes *yes* in some cases yes
An agent can determine whether its current choice of how to accomplish an objective is failing, and can unilaterally change its choice?[36] yes *yes* in some cases yes
COORDINATION PROBLEM COMPLEXITY
Are agents different processes running on different machines?[37] yes yes yes yes could be yes
Can agents fail to accomplish their tasks?[38] yes rarely yes yes yes yes
Is coordination:
   Episodic?[39] yes no could be yes yes could be
   Periodic?[40] yes no could be yes yes could be
   Continual?[41] yes *event driven* could be not yet yes could be
Fraction of possible issues each agent is involved in coordinating over at the same time?[42] small small small small can be large small
How many agents are involved in coordinating over a particular issue at the same time?[43] few few can be large few can be large application dependent
Does coordination involve allocating/scheduling sufficient resources/capabilities so as to meet some performance measure(s)?[44] yes yes yes yes yes
   Are tasks statically assigned to agents?[45] no no no so far so far no
   Are tasks and their needs known at outset?[46] no yes no tasks yes, needs no typically no
   Are tasks and their needs discovered over time?[47] yes no yes tasks no, needs yes in some cases yes
   Are sources for satisfying needs known at outset?[48] no no no no typically
   Are sources for satisfying needs discovered over time?[49] yes yes yes yes in some cases
   Can needs or sources for satisfying them arrive and disappear dynamically?[50] both yes both both yes yes
   Is there uncertainty in how well particular needs will be satisfied by particular sources?[51] yes yes application dependent yes yes could be
   Are there complementarities (how much one thing is needed depends on acquiring other things)?[52] yes yes application dependent yes yes could be
   Are there externalities (how much one thing is needed depends on whether others covet/acquire some things)?[53] partial yes application dependent seldom yes could be
   Can allocation/scheduling decisions lead to some agents being unable to achieve their goals to meet performance measures?[54] yes sometimes application dependent yes yes could be
   Can allocation/scheduling decisions lead to some agents being unable to achieve their goals at all?[55] partial rarely application dependent seldom possibly could be
PERFORMANCE MEASURES
Is an acceptable solution to the coordination problem:
  optimal?[56] no nearly application dependent no approximately no
  satisficing (meets some threshold measure)?[57] sometimes yes application dependent sometimes possibly application dependent
  satisfactory (minimally satisfies constraints/goals)?[58] always yes application dependent always no application dependent
Is a solution to the coordination problem monitored and repaired/replaced if it is recognized that it has become suboptimal or has failed altogether? yes not yet (it shouldn't fail - below) no (but learning methods deal with this to some extent) if fails altogether (suboptimality not covered)
Is an acceptable solution to the coordination problem robust in the face of changing conditions?[59] yes yes application dependent *yes* not necessarily
Is an acceptable solution to the coordination problem achieved:
  at any cost?[60] no no application dependent no in some of the techniques no
  at a cost that is less than the cost of  failure to coordinate?[61] partial yes application dependent *yes* in some of the techniques some decision theoretic reasoning used
  at the lowest possible cost?[62] no no application dependent no in some of the techniques no
How is the cost of solving the coordination problem measured:
  in elapsed time?[63] sometimes yes application dependent yes yes (sometimes captured as coordination action costs) sometimes
  in number of messages exchanged?[64] sometimes yes application dependent sometimes no sometimes
  in some measure of total effort expended?[65] no yes application dependent no no Domain experts may provide feedback

[1]
Edmund H. Durfee:
This is order of magnitude of the number of agents to which the technique has actually been applied to date.
[2]
Edmund H. Durfee:
These are beliefs about the external world, including other agents, that affect coordination/control.  If they know the same things about the world (see the same world), or if they all know everyone's preferences (a commonly known payoff matrix, for instance), or if they have no beliefs, then the answer to this is no.
[3]
Edmund H. Durfee:
If agents are essentially interchangeable in what they can do (any task can be done equally well by any agent), then they do not have heterogeneous capabilities.
[4]
Edmund H. Durfee:
If capabilities are unique, such that, for all tasks, a task can only be accomplished by one agent, then capabilities do not overlap.  When tasks can be assigned to any of a number of agents (possibly at different costs or quality of service levels), the population is more complex.
[5]
Edmund H. Durfee:
If all agents would agree on the best outcome of their joint activities (if they have identical preferences to their coordination decisions), then preferences are not heterogeneous.
[6]
Edmund H. Durfee:
If the coordination technique only helps in cases where there is conflict (over resource allocations, for example), then this is yes.
[7]
Edmund H. Durfee:
These are languages in which coordination is being done.  So, if they all talk "prices" or "plans," then this would be "no."  If some talk about prices, others about plans, others about organizational roles, then this would be yes.
[8]
Edmund H. Durfee:
This refers to the semantics of their languages.  If there is a shared understanding of what it means to provide resource x or capability y, or what a bid of z means, then this would be "no."  If agents can't count on a common ontology, then this is "yes."
[9]
Edmund H. Durfee:
This refers to the basic decision-making techniques of the involved agents as currently implemented.  If they all work basically the same (they all compute optimal bids, or generate plans, etc.), then this is "no."
[10]
Edmund H. Durfee:
Can agents arrive into and depart from the system over time, so that a static model of available capabilities and their allocation in the system is impossible?
[11]
Edmund H. Durfee:
While agents are engaged in making coordination decisions (converging on joint plans, contracting tasks, seeking equilibrium prices), can some of the agents involved depart and can others arrive, without triggering a complete restart of the coordination process?
[12]
Edmund H. Durfee:
These refer to aspects that influence coordination/control decisions, such as statically defined organizational roles, or positions in a hierarchy, or nearby "acquaintances."
[13]
Edmund H. Durfee:
These refer to aspects that affect coordination/control decisions, such as changes to available resources/capabilities, or running prices, etc.
[14]
Edmund H. Durfee:
Can an agent represent what it can do, so that it can tell whether it can accomplish a particular task, and can it potentially advertise its abilities to others?
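For illustration only (this is not drawn from any of the surveyed systems), a minimal Python sketch of what such an explicit self-model of capabilities and its advertisement might look like; the class and field names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    """A task type the agent can perform, with rough quality/cost metadata (illustrative)."""
    task_type: str
    quality: float   # e.g., expected quality of service, 0..1
    cost: float      # e.g., expected effort or price

@dataclass
class SelfModel:
    """Explicit model of the agent's own capabilities (question [14]); hypothetical design."""
    agent_id: str
    capabilities: list[Capability] = field(default_factory=list)

    def can_accomplish(self, task_type: str) -> bool:
        # The agent can tell whether it can accomplish a particular task.
        return any(c.task_type == task_type for c in self.capabilities)

    def advertisement(self) -> dict:
        # What the agent might send to a matchmaker/broker to advertise its abilities.
        return {
            "agent": self.agent_id,
            "provides": [c.task_type for c in self.capabilities],
        }
```

A matchmaker or broker could then index such advertisements and answer "who can accomplish task x?" queries on behalf of other agents.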
[15]
Edmund H. Durfee:
Can an agent access its own belief structure, to know what it believes and what it does not believe, and to potentially communicate its beliefs to others?
[16]
Edmund H. Durfee:
Can an agent access its own preferences, such that it can anticipate what states of the world it would prefer over others, and describe/explain these to other agents?
[17]
Edmund H. Durfee:
Can an agent inspect its own plans to anticipate its sequential actions, and can it potentially tell others about its intended plans?
[18]
Edmund H. Durfee:
Can an agent represent and utilize information it receives/learns about what other agents are capable of doing?
[19]
Edmund H. Durfee:
Can an agent assimilate beliefs conveyed by another agent?
[20]
Edmund H. Durfee:
Does an agent explicitly model the goals/preferences of other agents in order to coordinate better with them?
[21]
Edmund H. Durfee:
Does an agent represent the inferred/communicated planned activities of others and use these to coordinate with them?
[22]
Edmund H. Durfee:
Which of environment/self/others does an agent form models of through observation/experience?
[23]
Edmund H. Durfee:
Which of environment/self/others does an agent form models of through communication?
[24]
Edmund H. Durfee:
As per Katia's suggestion. I'm a little leery about this, since in some contexts a number can convey a lot of information while a bucket of text may convey very little, depending on the ontology.  But let's see what other people think.
[25]
Edmund H. Durfee:
I assume these are message types of the kind that KQML-like languages would refer to.  I'm a little unclear on interpreting this, since I think all would say "yes" to this (market-based systems would have both bids and some kind of clearing/matching message, etc.).  Perhaps this should instead ask whether the communication language permits versatile communication plans (I would think, then, that many approaches would say "no" in the sense that most would follow well-defined protocols).
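To make the notion of message types concrete, here is a minimal sketch of a KQML-style typed message, assuming a small, fixed set of performatives; the names below are illustrative and are not the actual KQML specification:

```python
from dataclasses import dataclass
from enum import Enum

class Performative(Enum):
    """Illustrative message types, loosely modeled on KQML-like performatives."""
    ASK = "ask"
    TELL = "tell"
    ACHIEVE = "achieve"
    ADVERTISE = "advertise"
    BID = "bid"
    AWARD = "award"

@dataclass
class Message:
    performative: Performative   # the message "type" referred to in [25]
    sender: str
    receiver: str
    ontology: str                # shared vocabulary the content is expressed in
    content: object              # a price, a plan, a task description, ...
```

Under this reading, a market-based mechanism might use only a couple of types (BID, AWARD), while plan- or team-oriented approaches would use many more.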
[26]
Edmund H. Durfee:
I interpret this as asking whether point-to-point communication is expected for coordination.
[27]
Edmund H. Durfee:
I interpret this to ask whether broadcast communication is assumed possible for coordination.
[28]
Edmund H. Durfee:
I interpret this to ask whether multicast communication is assumed available for coordination/control.
[29]
Edmund H. Durfee:
Since this refers to coordination actions, it asks whether an agent knows how to coordinate simply by knowing what its goals/preferences are, regardless of its current environment.
[30]
Edmund H. Durfee:
This asks whether an agent considers both its current beliefs and preferences in making decisions. This would be done, for example, by a  reactive agent.
[31]
Edmund H. Durfee:
In addition to beliefs and preferences, this asks whether an agent is assumed to construct plans (possibly partially-ordered, conditional, probabilistic…) to put actions together in combinations that jointly achieve its preferences in the current/expected circumstances. The actions are assumed to be those that affect coordination.
[32]
Edmund H. Durfee:
If, on top of everything else, an agent does not initially know how its actions might lead to satisfaction of its preferences, and therefore has to learn the effects of actions, then this would be "yes."
[33]
Edmund H. Durfee:
This gives a sense of how complicated each agent is.  Note, this question is not asking whether the same agent architecture can achieve various objectives, but rather whether a particular agent instance can. Typically, coordination is easier if each agent is a specialist (they either match a task or they don't) or if each agent is a generalist (assignment is based on availability rather than suitability). The in-between is harder, as availability and suitability need to be balanced given the options.
[34]
Edmund H. Durfee:
An agent is generally more complex if, given a task, it could accomplish the task in a number of ways.  Since this could require scheduling/allocating alternative combinations of resources/capabilities, selecting the right alternative is important. An agent could even explore multiple alternatives at the same time, leading to complexities in coordination as agents might "test the waters" with many others.
[35]
Edmund H. Durfee:
An agent is more complex if it leaves its options open until runtime, so that its choice of how it will accomplish its goals is made during execution. Control in this case requires either rapid on-line coordination, or prior coordination decisions that will work for any of the choices available to the agent.
[36]
Edmund H. Durfee:
This makes things even more challenging, in that not only might an agent make choices of methods at runtime, but that it might change its mind partway through if things aren't working out.
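A minimal sketch of the behavior described in [34]-[36]: an agent that has alternative methods for an objective picks one at runtime, monitors whether its current choice is failing, and unilaterally switches to another alternative. All function names here are hypothetical placeholders:

```python
def accomplish(objective, alternatives, is_failing, execute_step):
    """Try alternative methods for an objective, switching on detected failure.

    alternatives: candidate methods, ordered by the agent's preference ([34])
    is_failing(method, state): the agent's own test of whether its current choice is failing ([36])
    execute_step(method, state): performs one step, returns (new_state, done)
    """
    state = {"objective": objective}
    for method in alternatives:              # runtime choice of an alternative ([35])
        done = False
        while not done:
            if is_failing(method, state):    # self-monitoring of the current choice ([36])
                break                        # abandon this method, fall through to the next
            state, done = execute_step(method, state)
        if done:
            return state                     # objective accomplished
    raise RuntimeError("all alternatives for the objective failed")
```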
[37]
Edmund H. Durfee:
I assume that this will be a "yes" across the board, but it could be that some current systems "simulate" the agents rather than actually having separate processes running them.
[38]
Edmund H. Durfee:
If failure is a possibility, then the coordination mechanism needs to be robust in the face of failure.
[39]
Edmund H. Durfee:
Episodic means that there are clear start and end times to each "coordination episode" such that there is no carryover between episodes.
[40]
Edmund H. Durfee:
Periodic means that there are specific points in time (either clock-driven or event-driven) where the coordination process runs, where this could occur many times over an overall episode.  For example, in a market system, it could be that the market clears periodically.
[41]
Edmund H. Durfee:
By continual, I mean that there is no clear starting or ending point for coordination activity, but rather that coordination is ongoing and there might never be a time when the whole system is "coordinated."
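To make the periodic case in [40] concrete, a minimal sketch of a market that accumulates bids but clears only at fixed intervals; get_new_bids and clear_market are hypothetical callbacks:

```python
import time

def run_periodic_market(get_new_bids, clear_market, period_seconds=60.0):
    """Periodic coordination ([40]): bids accumulate, the market clears on a fixed cycle.

    get_new_bids(): returns bids that arrived since the last call (hypothetical callback).
    clear_market(bids): matches bids and returns the bids left unmatched (hypothetical callback).
    """
    open_bids = []
    while True:                               # the overall episode may span many clearings
        open_bids.extend(get_new_bids())
        open_bids = clear_market(open_bids)   # coordination happens only at these points
        time.sleep(period_seconds)
```

An episodic mechanism ([39]) would instead run the clearing step exactly once per self-contained episode, while a continual one ([41]) would have no distinguished clearing points at all.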
[42]
Edmund H. Durfee:
At one extreme, every possible resource/capability/action could be of concern to all agents.  At the other extreme, each agent might only be concerned about one resource/capability/action.
[43]
Edmund H. Durfee:
If all agents are concerned with the same resource/capability/etc., then as the number of agents grows, the coordination problem generally is more complex.
[44]
Edmund H. Durfee:
I think this is a given in the Grid, but just in case...
[45]
Edmund H. Durfee:
Suggested by Katia.  If tasks could be assigned dynamically, then an agent might accept a subtask from another only to discover that it has been assigned a new task by the "system" unexpectedly. Such possibilities obviously make coordination more complex.
[46]
Edmund H. Durfee:
This is assumed to refer to externally-given tasks (as opposed to the tasks that agents might form and pass around as they decompose and solve problems).  An agent might only discover what resources/capabilities it needs as it pursues tasks (if it has alternative ways of accomplishing tasks).
[47]
Edmund H. Durfee:
See comment of previous item.
[48]
Edmund H. Durfee:
When an agent identifies a needed resource/capability/etc., does it know (or does some entity in the system know) all the possible places where the need can be met?
[49]
Edmund H. Durfee:
If agents (presumably with capabilities/resources/etc.) arrive into the system and depart from the system over time, then this would have to be yes.
[50]
Edmund H. Durfee:
Related to previous item.
[51]
Edmund H. Durfee:
This was intended to mean whether an agent would know how satisfied it will be with the services delivered by an agent that claims to provide a desired service.  In market terms, is it assumed that all of the goods in the market are substitutable?  In a brokering system, is it assumed that agents that advertise the same capability will achieve the same result if given the same task? In a plan-based system, is it assumed that some agents' actions might be non-deterministic?
[52]
Edmund H. Durfee:
The coordination problem generally gets harder if each agent needs several resources/capabilities/etc. over some period of time, where if it fails to get one, it has little or no use for the others.
[53]
Edmund H. Durfee:
This says that an agent's preferences for a particular capability/resource/etc. depend on the preferences of others for the same thing. This would imply that an agent's preferences change dynamically as others' demands or acquisitions change.  For example, if agent A learns that agent B has acquired service x, then the value of service x to agent A changes even if agent A is still pursuing the same task.
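A small numeric sketch of the distinction drawn in [52] and [53], using a combinatorial-auction-style valuation function; the items and values are invented purely for illustration:

```python
def bundle_value(acquired, others_acquired):
    """Illustrative valuation showing a complementarity ([52]) and an externality ([53]).

    acquired: set of items this agent holds; others_acquired: items held by other agents.
    """
    value = 0.0
    # Complementarity: a truck is worth little without a driver, and vice versa.
    if "truck" in acquired and "driver" in acquired:
        value += 10.0
    elif "truck" in acquired or "driver" in acquired:
        value += 1.0
    # Externality: an exclusive license loses most of its value if a competitor also holds one.
    if "license" in acquired:
        value += 5.0 if "license" not in others_acquired else 1.0
    return value
```

Here bundle_value({"truck", "driver"}, set()) returns 10.0 while holding either item alone returns 1.0 (the complementarity), and the license is worth 5.0 only while no other agent holds one (the externality).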
[54]
Edmund H. Durfee:
Basically, is the problem hard enough that some coordination decisions could make it such that some agents cannot perform their tasks well enough to satisfy performance measures, or is the system flush enough with resources (or are performance measures lax enough) that practically any coordination decisions could lead to success?
[55]
Edmund H. Durfee:
Basically, is the problem hard enough that some coordination decisions could make it such that some agents cannot complete their tasks at all, or do bad coordination decisions simply degrade some performance measures?
[56]
Edmund H. Durfee:
Is the coordination mechanism responsible for optimally allocating/scheduling resources/capabilities/etc.?
[57]
Edmund H. Durfee:
Is there some aspiration level that the coordination decisions (such as allocation/scheduling of resources/capabilities/etc.) need to achieve, such that the mechanism can stop when it gets to that level rather than seeking an optimum?
[58]
Edmund H. Durfee:
Can the coordination mechanism end as soon as it finds any (combination of) coordination decisions that leads to the satisfaction of constraints and goals, without seeking an optimum or even a solution above an aspiration level? (Essentially, this is satisficing with the lowest consistent aspiration level.)
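The three acceptance criteria in [56]-[58] can be read as different stopping rules over candidate coordination solutions; the following sketch assumes hypothetical candidates, score, and feasible inputs:

```python
def pick_solution(candidates, score, feasible, mode, aspiration=None):
    """Stopping rules for accepting a coordination solution (illustrative only).

    'optimal'      ([56]): examine everything, keep the best feasible candidate.
    'satisficing'  ([57]): stop at the first feasible candidate with score >= aspiration.
    'satisfactory' ([58]): stop at the first feasible candidate at all
                           (satisficing with the lowest consistent aspiration level).
    """
    best = None
    for c in candidates:
        if not feasible(c):
            continue
        if mode == "satisfactory":
            return c
        if mode == "satisficing" and score(c) >= aspiration:
            return c
        if best is None or score(c) > score(best):
            best = c
    # Only 'optimal' mode returns the best found; the others return None if no
    # candidate met their acceptance criterion.
    return best if mode == "optimal" else None
```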
[59]
Edmund H. Durfee:
This asks whether the coordination mechanisms are supposed to find a "solution" that does not need to change even if the environment undergoes change.  This is in contrast to mechanisms that detect such changes and generate new solutions. (In planning terms, the difference between a robust plan that will work under a variety of conditions and a plan-repair/replanning methodology that revises the plan when conditions change.)
[60]
Edmund H. Durfee:
Will the coordination mechanism run to completion without monitoring or adjusting its own costs?
[61]
Edmund H. Durfee:
Does the coordination mechanism monitor/predict its own costs and make adjustments to increase the chances that the costs it incurs are less than the costs that failure to coordinate would be expected to incur?
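As a minimal sketch of the cost monitoring described in [61], assuming the mechanism can predict the cost of each coordination round and estimate the expected cost of failing to coordinate (all inputs are hypothetical):

```python
def coordinate_with_cost_bound(rounds, round_cost, expected_cost_of_failure):
    """Run coordination rounds, stopping early if the accumulated cost would exceed
    the expected cost of failing to coordinate ([61]).

    rounds: iterable of callables, each performing one coordination round and
            returning a solution or None (hypothetical).
    round_cost(r): predicted cost of running round r (hypothetical).
    """
    spent = 0.0
    for r in rounds:
        if spent + round_cost(r) > expected_cost_of_failure:
            return None          # cheaper to accept the failure to coordinate
        spent += round_cost(r)
        solution = r()
        if solution is not None:
            return solution
    return None
```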
[62]
Edmund H. Durfee:
In this case, the coordination mechanism is optimizing its own performance.
[63]
Edmund H. Durfee:
The longer it takes to solve the coordination problem, the worse the mechanism's performance.
[64]
Edmund H. Durfee:
If bandwidth is at a premium or communication delays are significant, this can be important.  This is especially important if the resources being coordinated over are communication resources!!
[65]
Edmund H. Durfee:
These are notoriously hard to define.