"Learning to trust is one of life's most difficult tasks." [I. Watts]

IntelligentAdvice.org is an international group of researchers with different backgrounds but similar interests, working on the topic of "Intelligent Online Advice". Our goal is to apply new techniques (mainly based on Artificial Intelligence) to produce better online advice for customers and users in general.

Humans systematically make substantive errors in reasoning due to cognitive and perceptual biases. These biases limit human reasoning capability and can lead to unnecessary misinterpretations and incorrect conclusions under uncertainty and incomplete knowledge. The problem also applies to reasoning and decision making based on trust and reputation, an area where humans typically process evidence from various sources, often subconsciously. In online environments it is especially challenging to assess the trustworthiness of other parties, because we are removed from familiar styles of interaction. The relative simplicity and low cost of establishing a good-looking Internet presence give little evidence about the solidity of the person or organization behind it. The difficulty of collecting evidence about online entities makes it hard to distinguish between high and low quality on the Internet, which affects the quality and stability of online communities. As a result, the topic of trust in open computer networks is receiving considerable attention in the academic community, as well as within online social networks and the e-commerce industry.

Trust and reputation systems represent a significant trend in reasoning and decision support for Internet-mediated service provision and interaction. Research shows that the stability of large online communities depends to a large degree on reputation systems, because centralized manual moderation of online communities would be too costly to be viable. The basic idea of online reputation systems is to let parties rate each other, for example after the completion of a transaction, and to aggregate the ratings about a given party into a trust or reputation score, which can assist other parties in deciding whether or not to transact with that party in the future. Reputation systems thus provide a distributed, collaborative method for sanctioning and praise, which creates a natural incentive for good behaviour and therefore tends to improve the quality and stability of online communities and markets.
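The rate-and-aggregate idea above can be sketched in a few lines of code. The following is a minimal hypothetical design, not a description of any specific deployed system: parties record ratings in [0, 1] about each other, and a party's reputation is a smoothed average of the ratings it has received, so that a party with no evidence starts at a neutral 0.5.

```python
from collections import defaultdict

class ReputationSystem:
    """Minimal sketch of a centralized reputation system (hypothetical)."""

    def __init__(self):
        # ratee -> list of ratings in [0, 1] left by other parties
        self.ratings = defaultdict(list)

    def rate(self, rater, ratee, score):
        """Record a rating, e.g. after a completed transaction."""
        assert 0.0 <= score <= 1.0
        self.ratings[ratee].append(score)

    def reputation(self, party):
        """Aggregate ratings into one score; 0.5 means no evidence yet.

        A beta-style estimate (Laplace smoothing) pulls parties with
        few ratings toward the neutral prior of 0.5.
        """
        r = self.ratings[party]
        return (sum(r) + 1) / (len(r) + 2)

rs = ReputationSystem()
rs.rate("alice", "bob", 1.0)
rs.rate("carol", "bob", 1.0)
rs.rate("dave", "bob", 0.0)
print(rs.reputation("bob"))      # (2 + 1) / (3 + 2) = 0.6
print(rs.reputation("unknown"))  # neutral prior: 0.5
```

Real systems differ mainly in the aggregation function: weighting recent ratings more heavily, discounting ratings from low-reputation raters, or propagating trust transitively through a network.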

Preferences are ubiquitous in everyday decision making and are typically intertwined with trust and reputation in human reasoning. Since preferences, trust and reputation are essential ingredients of most reasoning tasks, there is a need for a framework in which they can be integrated. In artificial intelligence, preferences are mainly studied in the context of multi-agent decision making, where each agent expresses its preferences over a set of possible decisions and the goal is to find the best collective decision. Classically, preferences have been studied in social choice theory, and in particular in voting theory, where several voters express their preferences over the candidates and a voting rule is used to elect the winning candidate. Since this scenario is similar to multi-agent decision making, much work in the multi-agent area has adapted social choice results to the multi-agent setting, taking into account issues that do not arise in the classical social choice context: a large set of candidates with a combinatorial structure, formalisms for modelling preferences compactly, preference orderings including indifference and incomparability, uncertainty, and computational concerns.
