Personally identifiable information
About: Personally identifiable information is a research topic. Over its lifetime, 12,829 publications have been published within this topic, receiving 229,370 citations.
12 May 2011
TL;DR: Pariser describes the filter bubble as a "unique, personal universe of information created just for you by this array of personalizing filters" and points out that users have no sense of what is being edited out, or even that editing is taking place at all.
Abstract: Author Q&A with Eli Pariser

Q: What is a Filter Bubble?
A: We're used to thinking of the Internet like an enormous library, with services like Google providing a universal map. But that's no longer really the case. Sites from Google and Facebook to Yahoo News and the New York Times are now increasingly personalized: based on your web history, they filter information to show you the stuff they think you want to see. That can be very different from what everyone else sees, or from what we need to see. Your filter bubble is this unique, personal universe of information created just for you by this array of personalizing filters. It's invisible, and it's becoming more and more difficult to escape.

Q: I like the idea that websites might show me information relevant to my interests; it can be overwhelming how much information is available. I already only watch TV shows and listen to radio programs that are known to share my political leaning. What's so bad about this?
A: It's true: we've always selected information sources that accord with our own views. But one of the creepy things about the filter bubble is that we're not really doing the selecting. When you turn on Fox News or MSNBC, you have a sense of what their editorial sensibility is: Fox isn't going to show many stories that portray Obama in a good light, and MSNBC isn't going to show the ones that portray him badly. Personalized filters are a different story: you don't know who they think you are or on what basis they're showing you what they're showing. And as a result, you don't really have any sense of what's getting edited out, or, in fact, that things are being edited out at all.

Q: How does money fit into this picture?
A: The rush to build the filter bubble is absolutely driven by commercial interests.
It's becoming clearer and clearer that if you want lots of people to use your website, you need to provide them with personally relevant information, and if you want to make the most money on ads, you need to provide them with relevant ads. This has triggered a personal information gold rush, in which the major companies (Google, Facebook, Microsoft, Yahoo, and the like) are competing to create the most comprehensive portrait of each of us to drive personalized products. There's also a whole behavior market opening up, in which every action you take online (every mouse click, every form entry) can be sold as a commodity.

Q: What is the Internet hiding from me?
A: As Google engineer Jonathan McPhie explained to me, it's different for every person, and in fact even Google doesn't totally know how it plays out on an individual level. At an aggregate level, they can see that people are clicking more. But they can't predict how each individual's information environment is altered. In general, the things that are most likely to get edited out are the things you're least likely to click on. Sometimes this can be a real service: if you never read articles about sports, why should a newspaper put a football story on your front page? But apply the same logic to, say, stories about foreign policy, and a problem starts to emerge. Some things, like homelessness or genocide, aren't highly clickable but are highly important.

Q: Which companies or websites are personalizing like this?
A: In one form or another, nearly every major website on the Internet is flirting with personalization. But the one that surprises people most is Google. If you and I Google the same thing at the same time, we may get very different results. Google tracks hundreds of signals about each of us (what kind of computer we're on, what we've searched for in the past, even how long it takes us to decide what to click on) and uses them to customize our results.
When the result is that our favorite pizza parlor shows up first when we Google "pizza," it's useful. But when the result is that we only see information that is aligned with our religious or social or political beliefs, it's difficult to maintain perspective.

Q: Are any sites being transparent about their personalization?
A: Some sites do better than others. Amazon, for example, is often quite transparent about the personalization it does: "We're showing you Brave New World because you bought 1984." But it's one thing to personalize products and another to personalize whole information flows, as Google and Facebook are doing. And very few users of those services are even marginally aware that this kind of filtering is at work.

Q: Does this issue of personalization impact my privacy or jeopardize my identity at all?
A: Research psychologists have known for a while that the media you consume shapes your identity. So when the media you consume is also shaped by your identity, you can slip into a weird feedback loop. A lot of people see a simple version of this on Facebook: you idly click on an old classmate, Facebook reads that as a friendship, and pretty soon you're seeing every one of John's or Sue's posts. Gone awry, personalization can create compulsive media: media targeted to appeal to your personal psychological weak spots. You can find yourself eating the equivalent of information junk food instead of having a more balanced information diet.

Q: You make it clear that while most websites' user agreements say they won't share our personal information, they also maintain the right to change the rules at any time. Do you foresee sites changing those rules to profit from our online personas?
A: They already have. Facebook, for example, is notorious for its bait-and-switch tactics when it comes to privacy. For a long time, what you "Liked" on Facebook was private, and the site promised to keep it that way.
Then, overnight, they made that information public to the world, in order to make it easier for their advertisers to target specific subgroups. There's an irony in the fact that while Rolex needs to get Tom Cruise's permission to put his face on a billboard, it doesn't need to get my permission to advertise my endorsement to my friends on Facebook. We need laws that give people more rights in their personal data.

Q: Is there any way to avoid this personalization? What if I'm not logged into a site?
A: Even if you're not logged into Google, for example, an engineer told me there are 57 signals that the site uses to figure out who you are: whether you're on a Mac or PC or iPad, where you're located when you're Googling, and so on. And in the near future, it'll be possible to fingerprint unique devices, so that sites can tell which individual computer you're using. That's why erasing your browser cookies is at best a partial solution; it only partially limits the information available to personalizers. What we really need is for the companies that power the filter bubble to take responsibility for the immense power they now have: the power to determine what we see and don't see, what we know and don't know. We need them to make sure we continue to have access to public discourse and a view of the common good. A world based solely on things we "Like" is a very incomplete world. I'm optimistic that they can. It's worth remembering that newspapers weren't always informed by a sense of journalistic ethics. They existed for centuries without it. It was only when critics like Walter Lippmann began to point out how important they were that the newspapers began to change. And while journalistic ethics aren't perfect, because of them we have been better informed over the last century. We need algorithmic ethics to guide us through the next.

Q: What are the business leaders at Google and Facebook and Yahoo saying about their responsibilities?
A: To be honest, they're frustratingly coy.
They tend to frame the trend in the passive tense. Google's Eric Schmidt recently said "It will be very hard for people to watch or consume something that has not in some sense been tailored for them," rather than "Google is making it very hard." Mark Zuckerberg perfectly summed up the tension in personalization when he said "A squirrel dying in your front yard may be more relevant to your interests right now than people dying in Africa." But he refuses to engage with what that means at a societal level, especially for the people in Africa.

Q: Your background is as a political organizer for the liberal website MoveOn.org. How does that experience inform your book?
A: I've always believed the Internet could connect us all together and help create a better, more democratic world. That's what excited me about MoveOn: here we were, connecting people directly with each other and with political leaders to create change. But that more democratic society has yet to emerge, and I think it's partly because while the Internet is very good at helping groups of people with like interests band together (like MoveOn), it's not so hot at introducing people to different people and ideas. Democracy requires discourse, and personalization is making that more and more elusive. And that worries me, because we really need the Internet to live up to that connective promise. We need it to help us solve global problems like climate change, terrorism, and natural resource management, which by their nature require massive coordination, and great wisdom and ingenuity. These problems can't be solved by a person or two; they require whole societies to participate. And that just won't happen if we're all isolated in a web of one.
TL;DR: Provides an overview of recommender systems and of collaborative filtering methods and algorithms; explains their evolution, proposes an original classification of these systems, identifies areas of future implementation, and expands on selected areas of past, present, or future importance.
Abstract: Recommender systems have developed in parallel with the web. They were initially based on demographic, content-based, and collaborative filtering. Currently, these systems are incorporating social information. In the future, they will use implicit, local, and personal information from the Internet of things. This article provides an overview of recommender systems as well as collaborative filtering methods and algorithms; it also explains their evolution, provides an original classification for these systems, identifies areas of future implementation, and expands on selected areas of past, present, or future importance.
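The collaborative filtering approach the survey covers can be illustrated with a minimal user-based sketch: score an unseen item for a user by taking a similarity-weighted average of other users' ratings. The rating data, cosine similarity, and neighborhood choice below are illustrative assumptions, not taken from the article.

```python
import math

# Toy user-item rating matrix; an absent key means "unrated".
# Purely illustrative data, not from the surveyed systems.
ratings = {
    "alice": {"matrix": 5, "titanic": 1, "inception": 4},
    "bob":   {"matrix": 4, "titanic": 2, "inception": 5, "up": 3},
    "carol": {"matrix": 1, "titanic": 5, "up": 4},
}

def cosine_sim(u, v):
    """Cosine similarity over the items both users rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine_sim(ratings[user], r)
        num += s * r[item]
        den += abs(s)
    return num / den if den else None

print(predict("alice", "up"))  # alice has not rated "up" herself
```

Because alice's tastes resemble bob's far more than carol's, the prediction lands closer to bob's rating of 3 than to carol's 4; content-based and demographic variants replace the user-user similarity with item-feature or profile similarity.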
TL;DR: The results of this study indicate that the second-order IUIPC factor, which consists of three first-order dimensions--namely, collection, control, and awareness--exhibited desirable psychometric properties in the context of online privacy.
Abstract: The lack of consumer confidence in information privacy has been identified as a major problem hampering the growth of e-commerce. Despite the importance of understanding the nature of online consumers' concerns for information privacy, this topic has received little attention in the information systems community. To fill the gap in the literature, this article focuses on three distinct, yet closely related, issues. First, drawing on social contract theory, we offer a theoretical framework on the dimensionality of Internet users' information privacy concerns (IUIPC). Second, we attempt to operationalize the multidimensional notion of IUIPC using a second-order construct, and we develop a scale for it. Third, we propose and test a causal model on the relationship between IUIPC and behavioral intention toward releasing personal information at the request of a marketer. We conducted two separate field surveys and collected data from 742 household respondents in one-on-one, face-to-face interviews. The results of this study indicate that the second-order IUIPC factor, which consists of three first-order dimensions--namely, collection, control, and awareness--exhibited desirable psychometric properties in the context of online privacy. In addition, we found that the causal model centering on IUIPC fits the data satisfactorily and explains a large amount of variance in behavioral intention, suggesting that the proposed model will serve as a useful tool for analyzing online consumers' reactions to various privacy threats on the Internet.
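A second-order construct like IUIPC is estimated in the paper via a structural model, but a common simplified operationalization in applied work is to average item responses within each first-order dimension and then average the dimension scores. The item counts, 7-point Likert scale, and responses below are invented for illustration and are not the paper's instrument.

```python
# Hypothetical Likert responses (1-7) grouped by the three IUIPC
# first-order dimensions; item counts are illustrative assumptions.
responses = {
    "collection": [6, 7, 5, 6],
    "control":    [5, 6, 6],
    "awareness":  [7, 7, 6],
}

def dimension_scores(resp):
    """Mean score per first-order dimension."""
    return {dim: sum(items) / len(items) for dim, items in resp.items()}

def iuipc_score(resp):
    """Simplified second-order score: mean of the dimension means.
    (The paper itself estimates the factor in a causal/structural model
    rather than by unweighted averaging.)"""
    dims = dimension_scores(resp)
    return sum(dims.values()) / len(dims)

print(round(iuipc_score(responses), 2))
```

Unweighted averaging is only a rough proxy: a second-order factor model lets each dimension load differently on the overall construct, which is part of what the paper's psychometric validation establishes.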
07 Nov 2005
TL;DR: Analyzes the online behavior of more than 4,000 Carnegie Mellon University students who joined a popular social networking site catering to colleges, evaluates the amount of information they disclose, and studies their usage of the site's privacy settings.
Abstract: Participation in social networking sites has dramatically increased in recent years. Services such as Friendster, Tribe, or the Facebook allow millions of individuals to create online profiles and share personal information with vast networks of friends and, often, unknown numbers of strangers. In this paper we study patterns of information revelation in online social networks and their privacy implications. We analyze the online behavior of more than 4,000 Carnegie Mellon University students who have joined a popular social networking site catering to colleges. We evaluate the amount of information they disclose and study their usage of the site's privacy settings. We highlight potential attacks on various aspects of their privacy, and we show that only a minimal percentage of users changes the highly permeable default privacy preferences.
28 Jun 2006
TL;DR: Surveys a representative sample of the members of the Facebook (a social network for colleges and high schools) at a US academic institution and compares the survey data to information retrieved from the network itself.
Abstract: Online social networks such as Friendster, MySpace, or the Facebook have experienced exponential growth in membership in recent years. These networks offer attractive means for interaction and communication, but also raise privacy and security concerns. In this study we survey a representative sample of the members of the Facebook (a social network for colleges and high schools) at a US academic institution, and compare the survey data to information retrieved from the network itself. We look for underlying demographic or behavioral differences between the communities of the network's members and non-members; we analyze the impact of privacy concerns on members' behavior; we compare members' stated attitudes with actual behavior; and we document the changes in behavior subsequent to privacy-related information exposure. We find that an individual's privacy concerns are only a weak predictor of their membership in the network. Also, privacy-concerned individuals join the network and reveal great amounts of personal information. Some manage their privacy concerns by trusting their ability to control the information they provide and the external access to it. However, we also find evidence of members' misconceptions about the online community's actual size and composition, and about the visibility of members' profiles.
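The finding that stated privacy concern is only a weak predictor of actual behavior amounts to a low correlation between an attitude measure and an observed-disclosure measure. A toy check of that pattern can be sketched as follows; the concern scores, disclosure counts, and sample are invented for illustration and are not the study's data.

```python
import math

# Hypothetical survey pairs: (stated privacy-concern score on a 1-7
# scale, number of profile fields the person actually disclosed).
# Invented data chosen so concern barely tracks disclosure.
data = [(2, 9), (3, 8), (5, 9), (6, 8), (7, 9), (4, 7), (6, 9), (1, 8)]

def pearson(pairs):
    """Pearson correlation between stated concern and disclosure."""
    n = len(pairs)
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson(data), 2))  # ~0.29: a weak correlation
```

A correlation this small means stated concern explains little of the variance in disclosure, which is the "attitudes versus behavior" gap the abstract describes.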