Detecting Deception in Reputation Management


@InProceedings{yu-2003a,
  author         = {Bin Yu and Munindar P. Singh},
  title          = {Detecting Deception in Reputation Management},
  year           = {2003},
  review-dates   = {2004-06-??, 2004-07-27},
  value          = {bb},
  booktitle      = {Proceedings of AAMAS '03},
  month          = {July},
  hardcopy       = {yes},
  key            = {yu-2003a}
}

Summary

This paper uses Dempster-Shafer theory and a distinction between local and total belief to represent and propagate trust beliefs. Local belief comes from an agent's own observations. Total belief is the combination of local belief and the testimony of witnesses.
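The combination of local belief with witness testimony is done with Dempster's rule of combination. A minimal sketch, assuming a binary frame {trustworthy, not trustworthy} with mass assigned to "T", "N" (not trustworthy), and "U" (uncertain, i.e. the whole frame); the function and variable names are illustrative, not the paper's notation:

```python
# Dempster's rule of combination over the frame {T, not-T}; each belief
# function is a dict with keys 'T', 'N', 'U' whose values sum to 1.

def combine(m1, m2):
    """Combine two belief functions by Dempster's rule."""
    # Conflict: mass placed on contradictory singletons.
    conflict = m1['T'] * m2['N'] + m1['N'] * m2['T']
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence; combination undefined")
    norm = 1.0 - conflict
    return {
        'T': (m1['T'] * m2['T'] + m1['T'] * m2['U'] + m1['U'] * m2['T']) / norm,
        'N': (m1['N'] * m2['N'] + m1['N'] * m2['U'] + m1['U'] * m2['N']) / norm,
        'U': (m1['U'] * m2['U']) / norm,
    }

# Local belief from the agent's own observations, plus one witness report.
local   = {'T': 0.6, 'N': 0.1, 'U': 0.3}
witness = {'T': 0.4, 'N': 0.2, 'U': 0.4}
total = combine(local, witness)   # total belief after incorporating testimony
```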

TrustNets (TNs) are built over an agent's acquaintances (=16) and neighbors (=4), and trustworthiness assessments are gathered by branching out through the acquaintance/neighbor network (referrals to other agents are controlled by a branching factor and a depth limit).
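A rough sketch of gathering such a network by bounded referral chasing, assuming a `referral_fn` callback that returns an agent's referrals about the target; the callback and the default limits are hypothetical placeholders, not the paper's exact procedure:

```python
def build_trustnet(source, target, referral_fn, branching=2, depth_limit=4):
    """Return (asker, referred_agent) edges gathered while searching for
    witnesses about `target`, starting from `source`."""
    edges = []
    frontier = [(source, 0)]   # breadth-first queue of (agent, depth)
    visited = {source}
    while frontier:
        agent, depth = frontier.pop(0)
        if depth >= depth_limit:
            continue
        # Ask this agent for at most `branching` referrals about the target.
        for referred in referral_fn(agent, target)[:branching]:
            edges.append((agent, referred))
            if referred not in visited:
                visited.add(referred)
                frontier.append((referred, depth + 1))
    return edges
```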

A weighted majority algorithm (WMA), extended to belief functions, is used to improve predictions over time.
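A minimal sketch of the weighted-majority idea, simplified here to scalar trust ratings rather than the paper's belief functions: each witness keeps a weight, the aggregate prediction is a weight-normalized combination of their reports, and the weights of witnesses whose reports diverged from the observed outcome are shrunk. The penalty form `w *= 1 - beta * |error|` and the value of `beta` are assumptions for illustration.

```python
def aggregate(reports, weights):
    """Weighted average of scalar trust reports in [0, 1]."""
    total_w = sum(weights[w] for w in reports)
    return sum(weights[w] * r for w, r in reports.items()) / total_w

def update_weights(reports, weights, outcome, beta=0.5):
    """Shrink each witness's weight in proportion to its prediction error."""
    for w, r in reports.items():
        weights[w] *= 1.0 - beta * abs(r - outcome)
    return weights

weights = {'w1': 1.0, 'w2': 1.0, 'w3': 1.0}
reports = {'w1': 0.9, 'w2': 0.2, 'w3': 0.8}   # w2 exaggerates negatively
prediction = aggregate(reports, weights)
weights = update_weights(reports, weights, outcome=0.85)
```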

Experiments modelled interest (for queries) and expertise (for responses) as 5-dimensional vectors. The more closely a response matches the query, the higher the rating given to the responding agent.
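An illustrative sketch of this matching, using cosine similarity between the 5-dimensional vectors; the choice of cosine similarity is an assumption, as the paper's exact similarity measure may differ:

```python
import math

def similarity(query, response):
    """Cosine similarity between a query (interest) and response (expertise) vector."""
    dot = sum(q * r for q, r in zip(query, response))
    norm = math.sqrt(sum(q * q for q in query)) * math.sqrt(sum(r * r for r in response))
    return dot / norm if norm else 0.0

query    = [0.9, 0.1, 0.4, 0.0, 0.2]   # the asking agent's interest vector
response = [0.8, 0.2, 0.5, 0.1, 0.1]   # the answering agent's expertise vector
rating = similarity(query, response)    # higher when the vectors match closely
```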

The base network is presumably a small-world network constructed in the classical manner from a 100-node ring. Ten agents give exaggerated positive ratings, ten give exaggerated negative ratings, and ten give complementary ratings.
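A sketch of that setup, assuming a Watts-Strogatz-style ring construction (the use of networkx, the rewiring probability, and the exaggeration coefficient `alpha` are all assumptions for illustration, not values from the paper):

```python
import networkx as nx

# 100-node ring lattice with random rewiring -> small-world topology.
G = nx.watts_strogatz_graph(n=100, k=4, p=0.1)

def exaggerate_positive(true_rating, alpha=0.5):
    """Report a rating inflated toward 1."""
    return min(1.0, true_rating + alpha * (1.0 - true_rating))

def exaggerate_negative(true_rating, alpha=0.5):
    """Report a rating deflated toward 0."""
    return max(0.0, true_rating - alpha * true_rating)

def complementary(true_rating):
    """Report the opposite of the true rating."""
    return 1.0 - true_rating
```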


Key Factors

How placed in context (other work): marsh-1994a, *Rahman and Hailes, *Social Interaction Framework (SIF) of Schillo, Funk and Rovatsos, Dempster-Shafer, Aberer and Despotovic, **Barber and Kim, Pujol et al., Sabater and Sierra, mui-2002c, sen-2002a, **Brainov and Sandholm

Problem Addressed: Deception in the propagation and aggregation of testimony; trust and reputation ratings and their propagation; how to combine evidence from an agent's own observations with the [reported] observations of other agents.

Main Claim and Evidence: Their scheme, which uses Dempster-Shafer theory, local vs. total belief organized around TrustNets, a form of WMA for learning, and a proscription on propagating hearsay, can effectively detect deception in testimony. This seems to be borne out by experiments showing that: 1) the error was largely unaffected by the number of witnesses (!?, even with the explanation given) [Figure 5]; 2) weight learning improves performance [Figure 6]; 3) the weights of liars are reduced, though only the complementary witnesses have their weights heavily clipped, and in general greater exaggeration led to lower weights as the weights were adapted.

Assumptions: Uniform and consistent deception. Small-world network. Long time horizons. No decomposition other than local vs. total belief.

Next steps: Authors stated next steps: 1) [Integrating reputation into MAS and electronic commerce systems.] 2) "study the dynamics of ratings with decay rates and... [3)] how an agent can adapt its strategy to the dynamic social structure of the given MAS and whether an agent should trust another agent based on the collected reputation information"

Remaining open questions: Could an agent mix exaggerated negative, exaggerated positive, or other sets of misrepresentations differentially and not be detected as a deceiver (assuming the motivation may vary for each referral)?

What if agent deception types weren't evenly distributed (different-sized populations, possibly with some types absent)?

Must have missed how agents decide whether to respond based on expertise or to refer instead (at random?).


Quality

Originality is good.
Contribution is excellent.
Quality of organization is excellent.
Quality of writing is excellent.