Trust in Multi-Agent Systems

@Article{ramchurn-2004a,
  author         = {Sarvapali D. Ramchurn and Dong Huynh and Nicholas R. Jennings},
  title          = {Trust in Multi-Agent Systems},
  year           = {2004},
  review-dates   = {2004-07-28, 2004-07-31},
  value          = {bb},
  read-status    = {reviewed},
  journal        = {Knowledge Engineering Review},
  hardcopy       = {no},
  key            = {ramchurn-2004a}
}

Summary

This big survey paper covers the areas of trust and reputation in multi-agent systems fairly broadly. (Note: the paper is 58 pages, but it is double-spaced and has a 10-page bibliography, so it reads more like 25 pages.)

Three key interaction problems:

  1. How should MAS protocols for encounters be engineered? (Suggests the protocol itself should be engineered to prevent cheating or malevolence.)
  2. How does an agent choose interaction partners?
  3. How does an agent decide when to interact with other agents?

I'm dubious of the second sentence in this quote: "...Agents are therefore necessarily faced with significant degrees of uncertainty in making decisions (i.e. it can be hard or impossible to devise probabilities for events happening). In such circumstances, agents have to trust each other in order to minimise the uncertainty associated with interactions in open distributed systems." [Trust of this kind won't minimise the uncertainty; it is only a strategy for dealing with uncertainty.]


Key Factors

How placed in context (other work): Well, that's kind of the whole point of this paper: to establish the context of all work in the field of trust in MAS.

"interactions form the the core of multi-agent-systems"

Two main (and complementary) approaches:
  1. Endow agents with reasoning: agents must be able "to reason about the reciprocative nature, reliability, or honesty of their counterparts" in both direct and indirect interaction. What amount or degree of trust can one agent place in another? [How, when, who...] - this maps to individual-level trust.
  2. Design protocols and mechanisms of interaction (e.g., bidding systems) - this maps to system-level trust.

I'm focused on Section 2 (individual aspect of trust). Here authors identify three areas: learning and evolutionary techniques, reputation, and socio-cognitive concepts.

Three approaches to individual trust cited:
  1. Monitor interactions and learn/adapt (here known as learning-based; a minimal sketch follows below this list). {molm-2000, carley-1991, prietula-2000, yamagishi-1998, !!dasgupta-1998}
  2. Accept communications from agents about other agents (reputation based)
  3. Characterize known motivations (socio-cognitive based)
[I would quibble that Sen has shown that learning-based trust (earned trust) is always critical to ensure that reputation-based trust will work.]
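
To make the learning-based approach (item 1 above) concrete, here is a minimal sketch of one way an agent could turn directly observed cooperate/defect outcomes into a trust estimate. This is my own illustration, not a model from the survey; the class name and the neutral prior are assumptions.

  class DirectTrust:
      """Toy learning-based trust: estimate how likely a partner is to
      cooperate, from directly observed interaction outcomes.
      (Illustrative sketch only, not a model from the paper.)"""

      def __init__(self, prior_coop: float = 1.0, prior_defect: float = 1.0):
          # Start from a neutral prior: one pseudo-observation of each outcome.
          self.coop = prior_coop
          self.defect = prior_defect

      def observe(self, cooperated: bool) -> None:
          # Record the outcome of one direct interaction.
          if cooperated:
              self.coop += 1
          else:
              self.defect += 1

      def trust(self) -> float:
          # Estimated probability that the partner cooperates next time.
          return self.coop / (self.coop + self.defect)

  t = DirectTrust()
  for outcome in [True, True, False, True]:
      t.observe(outcome)
  print(round(t.trust(), 2))   # 0.67 after 3 cooperations and 1 defection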

Provides some good examples of defection in commercial contracts (pg 9). Has a treatment of degrees of cooperation or defection in section 2.a.2.

"Wu and Sun show that trust can emerge between [agents]" [why is this notable? have to look at paper, I guess. This has bidding environment, so may be commercially interesting.] !! wu-2001.

sen-2002 and sen-2002b confirm each other's probabilistic results.

birk-2000, birk-2001 handle self-interest bounded by the group payoff determining individual payoffs (continuous N-player prisoner's dilemma).
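
As I understand the setup, a continuous N-player prisoner's dilemma lets each agent pick a cooperation level in [0, 1], with the shared group payoff feeding back into each individual payoff. A minimal sketch with made-up coefficients (the benefit and cost parameters are my assumptions, not birk's actual values):

  def payoffs(coop_levels, benefit=3.0, cost=1.0):
      # Continuous N-player prisoner's dilemma sketch: every agent shares the
      # benefit of total cooperation, but each bears only its own cost.
      # Coefficients are invented for illustration.
      n = len(coop_levels)
      group_benefit = benefit * sum(coop_levels) / n
      return [group_benefit - cost * c for c in coop_levels]

  print(payoffs([1.0, 1.0, 0.0]))   # [1.0, 1.0, 2.0]: the defector free-rides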

The assumptions of complete information and bistable behaviour (full defect/full cooperate) aren't realistic enough. Instead, "agents need to infer, from the information gathered through their direct interactions, how their opponents are performing and how their performance is affecting their goals." [Maybe change this to direct observations. Agents may be able to acquire info passively, apart from direct interactions that are communications with other agents.]

Agents must be able to rate and track the performance of other agents. !! witkowski-2001: intelligent telecom network, supply and demand, continuous valuations (not just good/bad). Also sabater-2002: REGRET/fuzziness on performance.
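
For continuous valuations like these, one simple way to track a partner's performance over repeated encounters is an exponentially weighted moving average. This is my own illustrative sketch, not the actual mechanism in witkowski-2001 or REGRET; alpha is an assumed parameter.

  def update_rating(previous: float, observed: float, alpha: float = 0.3) -> float:
      # Exponentially weighted moving average over continuous performance
      # scores in [0, 1]; larger alpha forgets old behaviour faster.
      return (1 - alpha) * previous + alpha * observed

  rating = 0.5                    # neutral starting point
  for score in [0.9, 0.8, 0.2]:   # quality of successive interactions
      rating = update_rating(rating, score)
  print(round(rating, 3))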

Reputation areas of research:

  1. Gathering ratings
  2. Reasoning methods and aggregation of ratings
  3. Mechanisms to promote ratings (???... system-level trust)

Most reputation models use a notion equivalent to the concept of social networks: burt-1982, buskens-1998. Witnesses and transmission, panzarasa-2001. Referrals, yu-2002(?), singh-2001, yu-2003. Honesty and altruism degrees per node, schillo-2000. Groups/neighbors, sabater-2002, Yu and Singh 2002.

Summation as in ebay-2003 is unreliable (kollock-1999, resnick-2002).
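
A small worked illustration of why raw summation misleads (the numbers are mine, not from the paper): with +1/-1 feedback, a prolific agent with plenty of negatives can still out-score a flawless agent who has fewer transactions.

  # eBay-style summation: +1 for positive feedback, -1 for negative.
  prolific_but_sloppy = [+1] * 80 + [-1] * 20   # 80% positive over 100 deals
  small_but_flawless = [+1] * 10                # 100% positive over 10 deals

  print(sum(prolific_but_sloppy), sum(small_but_flawless))   # 60 vs 10
  # Raw summation ranks the sloppier agent far higher; a rate-based or
  # otherwise normalised measure would not.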

Claim in paper is that Yu/Singh don't deal with lying about other agents, whereas schillo does.

Lots of pro-REGRET enthusiasm here. REGRET, however, doesn't handle strategic lying.

Socio-cognitive(!)

Subjectivity can help (?!) dasgupta-1998, gambetta-1998. [How?] Castelfranchi and Falcone 1998-2001. A BDI/cognitive view of trust vs. a "mere quantitative view of trust". The examples used involve "delegation".

Dimensions of trust for this belief/mental state

The claim is that competence is a prerequisite for trusting another agent. [This claim is too strong, as competence comes in degrees, methinks.]

Model an opponent's trust with a rational approach: brainov-1999, sandholm-1999. [Hey, is this socio-cognitive or system-level trust?!] The socio-cognitive approach to modelling trust is not supported by the kind of rationale (learning, reputation) that the other approaches are.

Skimmed the System-Level Trust area [not of core interest], though even from skimming it is pretty clear that all the individual trust mechanisms include aspects of the system-level trust area at least implicitly!

An important issue addressed in section 3.b is agents' selfishness in not sharing information unless there is some benefit in doing so. [But isn't getting the word out to other reciprocal agents a benefit in itself... and isn't not sharing an indication of weaseldom?] An incentive-compatible mechanism here - resnick-2002.

zacharia-2000, he-2003 and their desiderata:

  1. [Costly to change identities] [Well, having no track record should be a pretty big anchor when competing with others with established reputations. So switching should be /very/ costly.]
  2. "New entrants should not be penalized by initially having low reputation values attributed to them." [This is crap and at odds with the first item]
  3. "Agents with low ratings should be allowed to [rebuild reputation]"
  4. [Overhead in fake transactions should be high] [Is this a generally necessary requirement for all MAS? Or rather, this is only necessary when fake transactions are otherwise easy to pull off. Perhaps this is just part of the individual agent's need for further reasoning.]
  5. "Agents having a high reputation should have higher bearing than others on reputation values they attribute to an agent." [along the lines of earned trust] Even the authors have quibbles here.
  6. "Agents should be able to provide personalized evaluations." [Idealism here; in theory a complete characterization with all its nuances should be captured... even REGRET could miss something one might want to say.]
  7. "Agents should keep a memory of reputation values and give more importance to the latest ones obtained." [Sounds good. Has anyone tried to formalize this within an analytic or simulation framework? See the sketch after this list.]

Side payments to ensure reputation information is, well, reputable - jurca-2003,-2003.

Useful discussion of security requirements for trust/reputation, but not on core interest area.

Figure 1 isn't all that helpful, at least with the narrative around it. It does raise the question of why "individual-level" wasn't merely called "trust reasoning" or some such thing instead.

Authors give a semantic web example.

Problem Addressed: Capturing the current state of trust and reputation models and mechanisms, their strengths and weaknesses. Organizing some of this into a coherent framework.

Main Claim and Evidence: No single claim; this is not a research/experimental paper. See throughout this write-up.

Assumptions:

The Multi-Agent Systems in scope are equivalent to Open Distributed Systems.

  1. Agents represent different stakeholders [that have different goals.]
  2. Agents may come and go at any time.
  3. Allows a heterogeneous population of agents (beliefs, goals, capabilities vary)
  4. Allows agents to trade and collaborate in a wide variety of ways
Problems:
  1. Lack of full information
  2. Lack of sufficient computation resources

Trust definition: "Trust is a belief an agent has that the other party will do what it says it will (being honest and reliable) or reciprocate (being reciprocative for the common good of both), given an opportunity to defect to get higher payoffs (adapted from [Dasgupta, 1998])." !! dasgupta-1998.

Next steps:

Remaining open questions:
(According to authors)


Quality

Originality is good.
Contribution/Significance is excellent.
Quality of organization is excellent.
Quality of Writing (How hard to extract answer) is excellent.