Report on the Models of Trust for the Web workshop (MTW'06)
Abstract: We live in a time when millions of people are adding information to the Web through a growing collection of tools and platforms. Ordinary citizens publish all kinds of content on Web pages, blogs, wikis, podcasts, vlogs, message boards, shared spreadsheets, and new publishing forums that seem to appear almost monthly. As it becomes easier for people to add information to the Web, it becomes more difficult, and also more important, to distinguish reliable information and sources from unreliable ones.
The MTW'06 workshop was attended by over thirty researchers for a full day of presentations, panels, and spirited discussions.
2 Presented Papers
- The keynote speaker, Ricardo Baeza-Yates (Yahoo! Research), discussed how social networks can be exploited to provide social and economic deterrents for spamming.
- Several kinds of spam need to be monitored: scraper spam, which copies good content from other sites and adds monetization; synthetic text, which builds boilerplate text around key phrases; query-targeted spam, in which each page targets a single tail query; DNS spam, where many domains share the same servers; and blog spam.
- Using Flickr as an example, Ricardo showed how the "wisdom of crowds" can be used to search: Flickr users collaboratively search and tag each other's photos, and the tags and anchor text form collective knowledge that powers search.
- At Yahoo!, spam is detected and characterized using a combination of algorithmic and editorial techniques in order to prevent it from distorting the ranking of web pages.
2.1 Session I : Trust Networks
- In "Konfidi: Trust Networks Using PGP and RDF", David Brondsema and Andrew Schamp described their work on using social trust networks to filter spam.
- They proposed that spam can be filtered by reasoning over trust relationships expressed in RDF.
- These relationships specify who is trusted (both identity and public key), the degree of trust, and the topic to which the trust applies.
- They discussed the FilmTrust project, which is a movie recommendation system developed using their approach.
- In "Towards a Provenance-Preserving Trust Model in Agent Networks", Patricia Victor et al. defined a bilattice-based trust model that takes trust, distrust, lack of data, and contradictory data into consideration when calculating trust.
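The Konfidi idea of reasoning over typed trust edges can be illustrated with a small sketch. The edge schema, the multiplicative path rule, and the acceptance threshold below are illustrative assumptions for this summary, not the project's actual vocabulary or algorithm.

```python
# Toy sketch of topic-scoped trust inference over a network of trust edges.
# Edge values lie in [0, 1]; the schema and threshold are assumptions.

TRUST = {
    # (truster, trustee, topic): trust value in [0, 1]
    ("alice", "bob", "email"): 0.9,
    ("bob", "carol", "email"): 0.8,
    ("alice", "dave", "email"): 0.2,
}

def inferred_trust(source, target, topic, visited=None):
    """Best trust from source to target on a topic, multiplying along paths."""
    if visited is None:
        visited = {source}
    direct = TRUST.get((source, target, topic))
    if direct is not None:
        return direct
    best = 0.0
    for (truster, trustee, t), value in TRUST.items():
        if truster == source and t == topic and trustee not in visited:
            sub = inferred_trust(trustee, target, topic, visited | {trustee})
            best = max(best, value * sub)
    return best

THRESHOLD = 0.5  # assumed cutoff: below this, treat mail as likely spam

def accept_mail(recipient, sender):
    """Accept mail when the inferred topic-scoped trust clears the threshold."""
    return inferred_trust(recipient, sender, "email") >= THRESHOLD

ok = accept_mail("alice", "carol")   # via bob: 0.9 * 0.8 = 0.72, accepted
bad = accept_mail("alice", "dave")   # direct trust 0.2, rejected
```

In a real deployment the edges would be RDF statements signed with PGP keys rather than an in-memory dictionary, but the path-based inference step has the same shape.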
2.2 Session II : Inferring Trust
- The paper "Propagating Trust and Distrust to Demote Web Spam" by Baoning Wu, Vinay Goel, and Brian Davison addressed the problem of web spam, also known as search engine spam, in which a target page obtains an undeserved ranking.
- They described different methods that a parent page can use to divide its trust or distrust among its child pages.
- L. Jean Camp, Cathleen McGrath, and Alla Genkina approached human trust behavior from a social science perspective.
- They described their results in "Security and Morality: A Tale of User Deceit", in which they present how users "consider failures in benevolence more serious than failures in competence".
- Deborah McGuinness et al. reported in "Investigations into Trust for Collaborative Information Repositories: A Wikipedia Case Study" that both provenance of information and revision details are required to improve the trustworthiness of collaborative information systems such as Wikipedia.
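The parent-to-child splitting idea behind trust propagation can be sketched in a few lines. The link graph, the equal-split rule, and the damping factor below are assumptions for illustration; the paper explores several splitting methods, and real systems iterate over the full web graph.

```python
# Minimal sketch of a parent page dividing its trust equally among children.
# The tiny graph and damping value are made-up illustrative inputs.

LINKS = {
    "seed": ["a", "b"],  # a trusted seed page linking to a and b
    "a": ["c"],
    "b": [],
    "c": [],
}

def propagate(scores, damping=0.85):
    """One propagation step: each page splits its score among its out-links."""
    new = {page: 0.0 for page in LINKS}
    for page, score in scores.items():
        children = LINKS[page]
        if children:
            share = damping * score / len(children)
            for child in children:
                new[child] += share
    return new

scores = {"seed": 1.0, "a": 0.0, "b": 0.0, "c": 0.0}
step1 = propagate(scores)  # a and b each receive 0.85 / 2 = 0.425
step2 = propagate(step1)   # c receives 0.85 * 0.425 = 0.36125
```

Distrust can be propagated with the same mechanics under a separate score, so that a page's final ranking reflects both signals.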
2.3 Session III : Trust Models
- Santtu Toivonen, Gabriele Lenzini, and Ilkka Uusitalo explored the role of context in trust determination in their paper “Context-aware Trust Evaluation Functions for Dynamic Reconfigurable Systems”.
- They distinguished between quality attributes, which are static attributes of the trustee, and context attributes, which are optional attributes that can change dynamically, such as location and device type.
- They discussed how context-aware trust is calculated from quality attributes, reputation, and recommendations within a certain trust scope at a certain time.
- In "How Certain is Recommended Trust-Information", Uwe Roth and Volker Fusenig suggested that trust information given by a recommender may not be reliable and could negatively affect the trust decision.
- They proposed a strategy for making trust decisions by converting a network of direct and recommended trust relations into a decision tree, choosing the path, as well as whether to trust the recommended information along it, at random.
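The combination step described by Toivonen, Lenzini, and Uusitalo can be illustrated as a weighted aggregation of the three inputs they name. The linear form and the weights below are assumptions made for this sketch, not the paper's actual evaluation function.

```python
# Illustrative combination of quality attributes, reputation, and
# recommendations into one trust score in [0, 1]; weights are assumed.

def context_trust(quality, reputation, recommendations, weights=(0.5, 0.3, 0.2)):
    """Weighted sum of static quality, reputation, and averaged recommendations."""
    wq, wr, wc = weights
    rec = sum(recommendations) / len(recommendations) if recommendations else 0.0
    return wq * quality + wr * reputation + wc * rec

# A trustee with strong quality attributes, middling reputation, and
# mixed recommendations within the current trust scope:
score = context_trust(quality=0.9, reputation=0.6, recommendations=[0.8, 0.4])
# 0.5*0.9 + 0.3*0.6 + 0.2*0.6 = 0.75
```

Context awareness would enter by recomputing the inputs (or reweighting them) whenever context attributes such as location or device type change.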
2.4 Session IV : Trust in Applications
- The paper titled "Quality Labeling of Web Content: The Quatro approach" by Vangelis Karkaletsis et al. reports on a common machine-readable vocabulary for labelling web content, to be represented by user-friendly icons for ease of understanding.
- The vocabulary includes several kinds of labels: page-specific (e.g., whether the page uses clear language or includes a privacy statement), content-provider-specific (e.g., whether the provider's credentials have been verified), business-specific (e.g., whether the site complies with the rules and regulations of e-business), and label-specific (e.g., when the label was issued and when it was last viewed).
- Ing-Xiang Chen and Cheng-Zen Yang examined biased search engine results and their deviations in “A Study of Web Search Engine Bias and its Assessment”.
- The authors proposed a two-dimensional scheme that considers both indexical bias (differences in the sets of URLs retrieved) and content bias (deviations in content).
- In "Phishing with Consumer Electronics - Malicious Home Routers", Alex Tsow suggested that users are prone to attacks from malicious software embedded in their routers, and hinted that trust is not a software-only matter: there is also an implicit trust in hardware vendors.
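Indexical bias, as Chen and Yang define it, compares the sets of URLs two engines return. A simple way to sketch this is set disagreement; using one minus the Jaccard similarity is an illustrative choice for this summary, not necessarily the paper's metric.

```python
# Sketch of indexical bias as disagreement between two result-URL sets.
# 1 - Jaccard similarity: 0 means identical sets, 1 means disjoint sets.

def indexical_bias(results_a, results_b):
    """Return a bias value in [0, 1] comparing two engines' result sets."""
    a, b = set(results_a), set(results_b)
    if not a and not b:
        return 0.0  # two empty result lists agree trivially
    return 1.0 - len(a & b) / len(a | b)

bias = indexical_bias(
    ["u1", "u2", "u3"],  # engine A's top results
    ["u2", "u3", "u4"],  # engine B's top results
)
# overlap of 2 URLs out of a union of 4 → bias = 0.5
```

Content bias would require comparing the retrieved documents themselves (e.g., their text), which is why the authors treat the two dimensions separately.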
3 Future Directions / Open Research Issues
- The Models of Trust for the Web workshop helped participants understand the current state of the art and provided a discussion forum for researchers working on trust issues.
- Some open questions and problems include trust modeling and trust awareness: users' trust in computers depends heavily on previous experience.
- Furthermore, SWAM makes this task even more difficult.
- Trust in data and systems over time (compliance storage), also known as database security, is currently a major issue, especially after recent laws requiring companies to store data for longer periods.
- The Web and its evolving infrastructure have made it easy to access virtually all of the world's knowledge, and the Web is the first source to which most of us turn when we need to know something.
- Search engines and other tools have focused on finding information relevant to users' queries, with results ranked at best by their popularity.
- The papers from the WWW'06 Models of Trust for the Web workshop addressed these challenges, identified important open issues, and offered some partial solutions.
The Models of Trust for the Web workshop (MTW'06) was held in conjunction with the 15th International World Wide Web Conference (WWW2006) on 22 May 2006 in Edinburgh, Scotland.