
Showing papers presented at "Computational Science and Engineering in 2009"


Journal ArticleDOI
01 Jul 2009
TL;DR: If useful graph operations can be decomposed into MapReduce cycles, there is a strong incentive to seriously consider cloud computing; this approach also offers a way to process a large graph on a single machine that cannot hold the entire graph, and it enables streaming graph processing.
Abstract: As the size of graphs for analysis continues to grow, methods of graph processing that scale well have become increasingly important. One way to handle large datasets is to disperse them across an array of networked computers, each of which implements simple sorting and accumulating, or MapReduce, operations. This cloud computing approach offers many attractive features. If useful graph operations can be decomposed into MapReduce cycles, there is a strong incentive to seriously consider cloud computing. Moreover, this approach offers a way to handle a large graph on a single machine that cannot hold the entire graph, and it enables streaming graph processing. This article examines this possibility.
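To make the decomposition concrete, here is a minimal, self-contained sketch (not from the article) of one graph operation, out-degree counting, expressed as a map step followed by the sort-and-accumulate reduce step the abstract describes:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(edges):
    # Emit (source, 1) for every edge; each mapper would see only a slice of the edge list.
    return [(src, 1) for src, dst in edges]

def reduce_phase(pairs):
    # Sort by key, then accumulate counts per key -- the "sorting and accumulating" step.
    pairs = sorted(pairs, key=itemgetter(0))
    return {k: sum(v for _, v in g) for k, g in groupby(pairs, key=itemgetter(0))}

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "a")]
out_degree = reduce_phase(map_phase(edges))
# out_degree == {"a": 2, "b": 1, "c": 1}
```

In a real deployment the two phases would run across networked machines (or over a stream of edges); here both run locally only to show the decomposition.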

441 citations


Journal ArticleDOI
01 Jan 2009
TL;DR: The authors review their approach to reproducible computational research and how it has evolved over time, discussing the arguments for and against working reproducibly.
Abstract: Scientific computation is emerging as absolutely central to the scientific method. Unfortunately, it is error-prone and currently immature: traditional scientific publication is incapable of finding and rooting out errors in scientific computation, and this must be recognized as a crisis. An important recent development, and a necessary response to the crisis, is reproducible computational research, in which researchers publish the article along with the full computational environment that produces the results. In this article, the authors review their approach and how it has evolved over time, discussing the arguments for and against working reproducibly.

265 citations


Proceedings ArticleDOI
23 May 2009
TL;DR: The main conclusions are that the knowledge required to develop and use scientific software is primarily acquired from peers and through self-study, rather than from formal education and training, and that there is no uniform trend of association between the ranked importance of software engineering concepts and project/team size.
Abstract: New knowledge in science and engineering relies increasingly on results produced by scientific software. Therefore, knowing how scientists develop and use software in their research is critical to assessing the necessity for improving current development practices and to making decisions about the future allocation of resources. To that end, this paper presents the results of a survey conducted online in October-December 2008 which received almost 2000 responses. Our main conclusions are that (1) the knowledge required to develop and use scientific software is primarily acquired from peers and through self-study, rather than from formal education and training; (2) the number of scientists using supercomputers is small compared to the number using desktop or intermediate computers; (3) most scientists rely primarily on software with a large user base; (4) while many scientists believe that software testing is important, a smaller number believe they have sufficient understanding of testing concepts; and (5) scientists tend to rank standard software engineering concepts higher if they work in large software development projects and teams, but there is no uniform trend of association between the ranked importance of software engineering concepts and project/team size.

241 citations


Journal ArticleDOI
01 Nov 2009
TL;DR: A polished software tool F2PY is provided that can (semi-)automatically build interfaces between the Python and Fortran languages and hence almost completely hide the difficulties from the target user: a research scientist who develops a computer model using a high-performance scripting approach.
Abstract: In this paper we tackle the problem of connecting low-level Fortran programs to high-level Python programs. The difficulties of mixed language programming between Fortran and C are resolved in an almost compiler and platform independent way. We provide a polished software tool F2PY that can (semi-)automatically build interfaces between the Python and Fortran languages and hence almost completely hide the difficulties from the target user: a research scientist who develops a computer model using a high-performance scripting approach.

236 citations


Proceedings ArticleDOI
29 Aug 2009
TL;DR: FaceCloak is an architecture that protects user privacy on a social networking site by shielding a user's personal information from the site and from other users not explicitly authorized by the user, while seamlessly maintaining the usability of the site's services.
Abstract: Social networking sites, such as MySpace, Facebook and Flickr, are gaining more and more popularity among Internet users. As users are enjoying this new style of networking, privacy concerns are also attracting increasing public attention due to reports about privacy breaches on social networking sites. We propose FaceCloak, an architecture that protects user privacy on a social networking site by shielding a user's personal information from the site and from other users that were not explicitly authorized by the user. At the same time, FaceCloak seamlessly maintains usability of the site's services. FaceCloak achieves these goals by providing fake information to the social networking site and by storing sensitive information in encrypted form on a separate server. We implemented our solution as a Firefox browser extension for the Facebook platform. Our experiments show that our solution successfully conceals a user's personal information, while allowing the user and her friends to explore Facebook pages and services as usual.

210 citations


Proceedings ArticleDOI
29 Aug 2009
TL;DR: It is found that the strength of a friendship tie is most predictive of whether an individual will vouch for another, and vouches based on weak ties outnumber those between close friends.
Abstract: Reputation mechanisms are essential for online transactions, where the parties have little prior experience with one another. This is especially true when transactions result in offline interactions. There are few situations requiring more trust than letting a stranger sleep in your home, or conversely, staying on someone else’s couch. Couchsurfing.com allows individuals to do just this. The global CouchSurfing network displays a high degree of reciprocal interaction and a large strongly connected component of individuals surfing the globe. This high degree of interaction and reciprocity among participants is enabled by a reputation system that allows individuals to vouch for one another. We find that the strength of a friendship tie is most predictive of whether an individual will vouch for another. However, vouches based on weak ties outnumber those between close friends. We discuss these and other factors that could inform a more robust reputation system.

169 citations


Journal ArticleDOI
01 Jul 2009
TL;DR: Montage is a portable software toolkit for constructing custom, science-grade mosaics that preserve the astrometry and photometry of astronomical sources; it can be run on both single- and multi-processor computers, including clusters and grids.
Abstract: Montage is a portable software toolkit to construct custom, science-grade mosaics that preserve the astrometry and photometry of astronomical sources. The user specifies the dataset, wavelength, sky location, mosaic size, coordinate system, projection, and spatial sampling. Montage supports massive astronomical datasets that may be stored in distributed archives. Montage can be run on both single- and multi-processor computers, including clusters and grids. Standard grid tools are used to access remote data or run Montage on remote computers. This paper describes the architecture, algorithms, performance, and usage of Montage as both a software toolkit and a grid portal.

165 citations


Proceedings ArticleDOI
29 Aug 2009
TL;DR: A churn prediction method that combines social influence and player engagement factors is shown to significantly improve prediction accuracy on the dataset compared to prediction using the conventional diffusion model or the player engagement factor alone, validating the hypothesis that combining both factors leads to more accurate churn prediction.
Abstract: Massively Multiplayer Online Role-Playing Games (MMORPGs) are computer-based games in which players interact with one another in a virtual world. Worldwide revenues for MMORPGs have grown remarkably in the last few years; by current estimates it is more than a two-billion-dollar industry. This revenue potential has attracted several gaming companies to launch online role-playing games. Apart from fierce competition, one of the major problems these companies suffer is erosion of their customer base. Churn is a big problem for gaming companies because churners generate negative word-of-mouth reports for potential and existing customers, leading to further erosion of the user base. We study the problem of player churn in the popular MMORPG EverQuest II. Churn prediction has been studied extensively in various domains, and social network analysis has recently been applied to the problem to understand the effects of the strength of social ties and of the structure and dynamics of a social network on churn. In this paper, we propose a churn prediction model based on examining social influence among players and their personal engagement in the game. We hypothesize that social influence is a vector quantity, with negative and positive components. We propose a modified diffusion model that propagates the influence vector through the player network, representing the social influence exerted on a player by his or her network. We measure a player's personal engagement based on his or her activity patterns and use it in the modified diffusion model and in churn prediction. Our churn prediction method, which combines social influence and player engagement factors, significantly improves prediction accuracy on our dataset compared to prediction using the conventional diffusion model or the player engagement factor alone, validating our hypothesis that combining both factors leads to more accurate churn prediction.
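As an illustration of the general idea only, the following toy sketch (not the paper's model; the update rule, damping factor, and engagement weighting are assumptions made for illustration) propagates a two-component (negative, positive) influence vector through a small player network, with highly engaged players resisting their neighbours' influence:

```python
def propagate(network, influence, engagement, alpha=0.5, steps=3):
    """Toy diffusion: each player's (negative, positive) influence vector is
    pulled toward the average of its neighbours', damped by personal engagement."""
    for _ in range(steps):
        updated = {}
        for player, neighbours in network.items():
            avg_neg = sum(influence[n][0] for n in neighbours) / len(neighbours)
            avg_pos = sum(influence[n][1] for n in neighbours) / len(neighbours)
            w = alpha * (1 - engagement[player])  # engaged players resist influence
            updated[player] = (
                (1 - w) * influence[player][0] + w * avg_neg,
                (1 - w) * influence[player][1] + w * avg_pos,
            )
        influence = updated
    return influence

network = {"p1": ["p2", "p3"], "p2": ["p1"], "p3": ["p1"]}
influence = {"p1": (0.0, 1.0), "p2": (1.0, 0.0), "p3": (0.5, 0.5)}
engagement = {"p1": 0.9, "p2": 0.2, "p3": 0.5}
result = propagate(network, influence, engagement)
```

After a few steps, poorly engaged players (like `p2` here) drift toward their neighbours' influence, which is the intuition behind combining network diffusion with an engagement factor.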

164 citations


Proceedings ArticleDOI
29 Aug 2009
TL;DR: This study is the first large-scale quantitative analysis of a real-world commercial LSN service and presents results of data analysis over user profiles, update activities, mobility characteristics, social graphs, and attribute correlations.
Abstract: Location-based Social Networks (LSNs) allow users to see where their friends are, to search location-tagged content within their social graph, and to meet others nearby. The recent availability of open mobile platforms, such as Apple iPhones and Google Android phones, makes LSNs much more accessible to mobile users. To study how users share their location in the real world, we collected traces from a commercial LSN service operated by a startup company. In this paper, we present results of data analysis over user profiles, update activities, mobility characteristics, social graphs, and attribute correlations. To the best of our knowledge, this study is the first large-scale quantitative analysis of a real-world commercial LSN service.

159 citations


Proceedings ArticleDOI
29 Aug 2009
TL;DR: This work proposes and evaluates a machine learning-based approach for ranking comments on the Social Web based on the community's expressed preferences, which can be used to promote high-quality comments and filter out low- quality comments.
Abstract: We study how an online community perceives the relative quality of its own user-contributed content, which has important implications for the successful self-regulation and growth of the Social Web in the presence of increasing spam and a flood of Social Web metadata. We propose and evaluate a machine learning-based approach for ranking comments on the Social Web based on the community's expressed preferences, which can be used to promote high-quality comments and filter out low-quality comments. We study several factors impacting community preference, including the contributor's reputation and community activity level, as well as the complexity and richness of the comment. Through experiments, we find that the proposed approach results in significant improvement in ranking quality versus alternative approaches.

143 citations


Journal ArticleDOI
01 Nov 2009
TL;DR: The authors conducted an ethnographic study of climate scientists and found that their culture and practices share many features of agile and open source projects, but with highly customized software validation and verification techniques.
Abstract: Climate scientists build large, complex simulations with little or no software engineering training—and don't readily adopt the latest software engineering tools and techniques. This ethnographic study of climate scientists shows that their culture and practices share many features of agile and open source projects, but with highly customized software validation and verification techniques.

Journal ArticleDOI
01 Jan 2009
TL;DR: The authors point to the success of the reproducible research discipline in increasing the reliability of computational research and reflect on the effort necessary for implementing this discipline in a research group and overcoming possible objections to it.
Abstract: The articles in this special issue provide independent solutions for practical reproducible research systems. One article presents the use of Matlab-based tools, such as the famous Wavelab and Sparselab packages, in promoting reproducible research in computational harmonic analysis. In particular, its authors point to the success of the reproducible research discipline in increasing the reliability of computational research, and reflect on the effort necessary for implementing this discipline in a research group and overcoming possible objections to it. Another article describes a Python interface to the well-known Clawpack package for solving the hyperbolic partial differential equations that appear in wave propagation problems; its author argues strongly in favor of reproducible computations and shows an example using a simplified Python interface to Fortran code. A third article represents the field of bioinformatics, which has been a stronghold of reproducible research. It describes the cacher package, built on top of the R computing environment, which enables a modular approach to reproducible computations by storing the results of intermediate computations in a database. The special issue ends with an article on the legal aspects of reproducible research, including copyright and licensing issues.

Proceedings ArticleDOI
29 Aug 2009
TL;DR: A novel set of social-network-analysis-based algorithms for mining the Web, blogs, and online forums to identify trends and the people launching them, and to predict long-term trends in the popularity of relevant concepts such as brands, movies, and politicians.
Abstract: We introduce a novel set of social network analysis based algorithms for mining the Web, blogs, and online forums to identify trends and find the people launching these new trends. These algorithms have been implemented in Condor, a software system for predictive search and analysis of the Web and especially social networks. The algorithms include the temporal computation of network centrality measures, the visualization of social networks as Cybermaps, a semantic process of mining and analyzing large amounts of text based on social network analysis, and sentiment analysis and information filtering methods. The temporal calculation of the betweenness of concepts makes it possible to extract and predict long-term trends in the popularity of relevant concepts such as brands, movies, and politicians. We illustrate our approach by qualitatively comparing Web buzz and our Web betweenness for the 2008 US presidential elections, as well as correlating the Web buzz index with share prices.
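Betweenness centrality, the measure named above, is standard; a brute-force version for small graphs can be sketched as follows (the Condor system computes such measures per time slice over much larger networks, so this static toy only fixes the definition):

```python
from collections import deque

def shortest_paths(graph, s, t):
    # All shortest s-t paths, found by breadth-first search over partial paths.
    queue, found, best = deque([[s]]), [], None
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break  # BFS order: every remaining path is at least this long
        if path[-1] == t:
            best = len(path)
            found.append(path)
            continue
        for nxt in graph[path[-1]]:
            if nxt not in path:
                queue.append(path + [nxt])
    return found

def betweenness(graph):
    # Fraction of shortest paths passing through each node, summed over pairs.
    score = {v: 0.0 for v in graph}
    nodes = sorted(graph)
    for i, s in enumerate(nodes):
        for t in nodes[i + 1:]:
            paths = shortest_paths(graph, s, t)
            for p in paths:
                for v in p[1:-1]:  # interior nodes only
                    score[v] += 1.0 / len(paths)
    return score

# Path graph a - b - c: "b" sits between "a" and "c".
g = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
scores = betweenness(g)
# scores == {"a": 0.0, "b": 1.0, "c": 0.0}
```

Tracking how a concept's betweenness rises or falls across successive snapshots is what turns this static measure into a trend signal.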

Proceedings ArticleDOI
29 Aug 2009
TL;DR: A survey of various human computation systems, categorized into initiatory, distributed, and social game-based human computation performed by volunteers, paid engineers, and online players.
Abstract: Human computation is a technique that harnesses human abilities to solve computational problems that computers are not good at solving but that are trivial for humans. In this paper, we give a survey of various human computation systems, categorized into initiatory human computation, distributed human computation, and social game-based human computation involving volunteers, paid engineers, and online players. Previous works have defined various types of social games, but recently developed social games cannot be categorized under those schemes. In this paper, we define categories and characteristics of social games that cover all existing ones. In addition, we present a survey of the performance aspects of human computation systems. This paper provides a better understanding of human computation systems.

Proceedings ArticleDOI
29 Aug 2009
TL;DR: The characteristics of cloud computing are investigated and an efficient privacy preserving keyword search scheme in cloud computing is proposed that allows a service provider to participate in partial decipherment to reduce a client's computational overhead and enables the service providers to search the keywords on encrypted files to protect the user data privacy and the user queries privacy efficiently.
Abstract: A user stores his personal files in a cloud and retrieves them wherever and whenever he wants. To protect the privacy of the user's data and queries, the user should store his personal files in encrypted form in the cloud and then send queries in the form of encrypted keywords. However, a simple encryption scheme may not work well when a user wants to retrieve only the files containing certain keywords using a thin client. First, the user needs to encrypt and decrypt files frequently, which consumes too much of the client's CPU and memory. Second, if the encryption is not searchable, the service provider cannot determine which files contain the keywords specified by a user and can only return all the encrypted files; since a thin client generally has limited bandwidth, CPU, and memory, this is not a feasible solution. In this paper, we investigate the characteristics of cloud computing and propose an efficient privacy-preserving keyword search scheme for cloud computing. It allows a service provider to participate in partial decipherment to reduce the client's computational overhead, and it enables the service provider to search keywords over encrypted files while efficiently protecting the privacy of the user's data and queries. We prove that our scheme is semantically secure.
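The full scheme involves partial decipherment by the provider; a much simpler illustration of the searchable half of the idea, using deterministic HMAC keyword tokens (an assumption for this sketch, not the paper's construction), looks like this:

```python
import hashlib
import hmac

SECRET = b"client-only-key"  # never leaves the client

def index_file(file_id, keywords):
    # The client uploads deterministic keyword tokens instead of plaintext keywords,
    # so the server can match queries without learning the keywords themselves.
    tokens = {hmac.new(SECRET, w.encode(), hashlib.sha256).hexdigest() for w in keywords}
    return tokens, file_id

def search(server_index, keyword):
    # The server compares opaque tokens; it sees neither the query nor file contents.
    token = hmac.new(SECRET, keyword.encode(), hashlib.sha256).hexdigest()
    return [fid for tokens, fid in server_index if token in tokens]

server_index = [index_file("report.enc", ["budget", "q3"]),
                index_file("notes.enc", ["meeting"])]
matches = search(server_index, "budget")
# matches == ["report.enc"]
```

Deterministic tokens leak keyword-equality patterns, which is exactly the kind of weakness the paper's more elaborate construction is designed to address.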

Proceedings ArticleDOI
29 Aug 2009
TL;DR: A study of large online social footprints based on data from 13,990 active users finds that a user with one social network reveals an average of 4.3 personal information fields, and presents an initial investigation of matching profiles using public profile information.
Abstract: We study large online social footprints by collecting data on 13,990 active users. After parsing data from 10 of the 15 most popular social networking sites, we find that a user with one social network reveals an average of 4.3 personal information fields. For users with over 8 social networks, this average increases to 8.25 fields. We also investigate the ease by which an attacker can reconstruct a person’s social network profile. Over 40% of an individual’s social footprint can be reconstructed by using a single pseudonym (assuming the attacker guesses the most popular pseudonym), and an attacker can reconstruct 10% to 35% of an individual’s social footprint by using the person’s name. We also perform an initial investigation of matching profiles using public information in a person’s profile.
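A toy sketch of the pseudonym-matching idea (the site names and profile fields below are hypothetical) shows how linking one username across sites aggregates personal information fields:

```python
def link_profiles(profiles, pseudonym):
    """Return the union of fields an attacker learns by matching one pseudonym
    across sites -- a toy version of footprint reconstruction."""
    footprint = {}
    for site, data in profiles.items():
        if data.get("username") == pseudonym:
            footprint.update({k: v for k, v in data.items() if k != "username"})
    return footprint

# Hypothetical profiles scraped from three sites.
profiles = {
    "site_a": {"username": "jdoe42", "city": "Boston", "employer": "Acme"},
    "site_b": {"username": "jdoe42", "birthday": "1980-05-01"},
    "site_c": {"username": "other", "phone": "555-0100"},
}
leaked = link_profiles(profiles, "jdoe42")
# leaked == {"city": "Boston", "employer": "Acme", "birthday": "1980-05-01"}
```

The more sites reuse the same pseudonym, the larger the union of leaked fields, which matches the paper's observation that the average field count rises with the number of networks.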

Proceedings ArticleDOI
29 Aug 2009
TL;DR: This study shows that, given adequate management, the core virtualization technology has a clear positive effect on availability, but that the effect on confidentiality and integrity is less positive.
Abstract: Server virtualization is a key technology for today's data centers, allowing dedicated hardware to be turned into resources that can be used on demand. However, in spite of its important role, the overall security impact of virtualization is not well understood. To remedy this situation, we have performed a systematic literature review on the security effects of virtualization. Our study shows that, given adequate management, the core virtualization technology has a clear positive effect on availability, but that the effect on confidentiality and integrity is less positive. Virtualized systems tend to lose the properties of location-boundedness, uniqueness, and monotonicity. In order to ensure corporate and private data security, we propose to either remove or tightly manage non-essential features such as introspection, rollback, and transfer.

Proceedings ArticleDOI
29 Aug 2009
TL;DR: A novel attack called automated social engineering is introduced which illustrates how social networking sites can be used for social engineering and takes classical social engineering one step further by automating tasks which formerly were very time-intensive.
Abstract: A growing number of people use social networking sites to foster social relationships. While the advantages of the provided services are obvious, the drawbacks for users' privacy and the implications that arise are often neglected. In this paper we introduce a novel attack called automated social engineering, which illustrates how social networking sites can be used for social engineering. Our approach takes classical social engineering one step further by automating tasks that formerly were very time-intensive. To evaluate our proposed attack cycle and our prototypical implementation (ASE bot), we conducted two experiments. In the first experiment, we examine the information-gathering capabilities of our bot. The second evaluation of our prototype performs a Turing test. The promising results of the evaluation highlight the possibility of efficiently and effectively performing social engineering attacks with automated social engineering bots.

Proceedings ArticleDOI
29 Aug 2009
TL;DR: A computational framework to predict synchrony of action in online social media, including a DBN-based representation that incorporates user context to predict the probability of user actions over a set of time slices into the future.
Abstract: We propose a computational framework to predict synchrony of action in online social media. Synchrony is a temporal social network phenomenon in which a large number of users are observed to mimic a certain action over a period of time, with sustained participation from early users. Understanding social synchrony can be helpful in identifying suitable time periods for viral marketing. Our method consists of two parts: the learning framework and the evolution framework. In the learning framework, we develop a DBN-based representation that includes an understanding of user context to predict the probability of user actions over a set of time slices into the future. In the evolution framework, we evolve the social network and the user models over a set of future time slices to predict social synchrony. Extensive experiments on a large dataset crawled from the popular social media site Digg (comprising ~7M diggs) show that our model yields low error (15.2±4.3%) in predicting user actions during periods with and without synchrony. Comparison with baseline methods indicates that our method shows significant improvement in predicting user actions.

Proceedings ArticleDOI
29 Aug 2009
TL;DR: This work allows location-based services to query local mobile devices for users' social network information, without disclosing user identity or compromising users' privacy and security.
Abstract: Social network information is now being used in ways for which it may not have been originally intended. In particular, the increased use of smartphones capable of running applications that access social network information enables applications to be aware of a user's location and preferences. However, current models for the exchange of this information require users to compromise their privacy and security. We present several of these privacy and security issues, along with our design and implementation of solutions for them. Our work allows location-based services to query local mobile devices for users' social network information without disclosing user identity or compromising users' privacy and security. We contend that it is important that such solutions be accepted as mobile social networks continue to grow exponentially.

Proceedings ArticleDOI
29 Aug 2009
TL;DR: This paper identifies a structured and comprehensive set of privacy-related requirements for vehicular communication systems, and analyzes the complex inter-relations among them to enable system designers to better understand privacy issues in vehicular networks and properly address privacy requirements during the system design process.
Abstract: A primary goal of vehicular communication systems is the enhancement of traffic safety by equipping vehicles with wireless communication units to facilitate cooperative awareness. Privacy issues arise from the frequent broadcasting of real-time positioning information. Thus privacy protection becomes a key factor for enabling widespread deployment. At the same time, stakeholders demand accountability due to the safety-critical nature of many applications. Earlier works on privacy requirements for vehicular networks often discussed them as a part of security. Therefore many aspects of privacy requirements have been overlooked. In this paper, we identify a structured and comprehensive set of privacy-related requirements for vehicular communication systems, and analyze the complex inter-relations among them. Our results enable system designers to better understand privacy issues in vehicular networks and properly address privacy requirements during the system design process. We further show that our requirements set facilitates the comparison and evaluation of different privacy approaches for vehicular communication systems.

Journal ArticleDOI
01 Jan 2009
TL;DR: The authors discuss this configurable system's architecture and focus on its use for Monte Carlo simulations of statistical mechanics, as Janus performs impressively on this class of application.
Abstract: Janus is a modular, massively parallel, and reconfigurable FPGA-based computing system. Each Janus module has one computational core and one host. Janus is tailored to, but not limited to, the needs of a class of hard scientific applications characterized by regular code structure, unconventional data-manipulation requirements, and databases of only a few megabits. The authors discuss this configurable system's architecture and focus on its use for Monte Carlo simulations of statistical mechanics, as Janus performs impressively on this class of application.

Proceedings ArticleDOI
29 Aug 2009
TL;DR: Ambulation is a mobility monitoring system that uses Android and Nokia N95 mobile phones to automatically detect the user's mobility mode and uploads the collected mobility and location information to a server.
Abstract: An important tool for evaluating the health of patients who suffer from mobility-affecting chronic diseases such as MS, Parkinson’s, and Muscular Dystrophy is assessment of how much they walk. Ambulation is a mobility monitoring system that uses Android and Nokia N95 mobile phones to automatically detect the user’s mobility mode. The user’s only required interaction with the phone is turning it on and keeping it with him/her throughout the day, with the intention that it could be used as his/her everyday mobile phone for voice, data, and other applications, while Ambulation runs in the background. The phone uploads the collected mobility and location information to a server and a secure, intuitive web-based visualization of the data is available to the user and any family, friends or caregivers whom they authorize, allowing them to identify trends in their mobility and measure progress over time and in response to varying treatments.

Journal ArticleDOI
01 Nov 2009
TL;DR: This work discusses how the UFC interface enables implementations of variational form evaluation to be independent of mesh and linear algebra components, and proposes a general interface between problem-specific and general-purpose components of finite element programs.
Abstract: At the heart of any finite element simulation is the assembly of matrices and vectors from discrete variational forms. We propose a general interface between problem-specific and general-purpose components of finite element programs. This interface is called Unified Form-assembly Code (UFC). A wide range of finite element problems is covered, including mixed finite elements and discontinuous Galerkin methods. We discuss how the UFC interface enables implementations of variational form evaluation to be independent of mesh and linear algebra components. UFC does not depend on any external libraries, and is released into the public domain.
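The split that UFC formalizes, problem-specific element tensors versus general-purpose assembly, can be illustrated with a toy 1D example (a sketch of the idea only, not UFC's actual interface):

```python
def element_stiffness(h):
    # Problem-specific piece: local stiffness matrix of a linear 1D element
    # of length h (for the Laplace operator).
    return [[1.0 / h, -1.0 / h], [-1.0 / h, 1.0 / h]]

def assemble(n_elements, h):
    # General-purpose piece: scatter each element matrix into the global matrix.
    # It needs only the local-to-global index map, not the variational form.
    n = n_elements + 1
    A = [[0.0] * n for _ in range(n)]
    for e in range(n_elements):
        Ae = element_stiffness(h)
        for i in range(2):
            for j in range(2):
                A[e + i][e + j] += Ae[i][j]
    return A

A = assemble(2, 0.5)  # two elements of length 0.5 on [0, 1]
# A == [[2.0, -2.0, 0.0], [-2.0, 4.0, -2.0], [0.0, -2.0, 2.0]]
```

In this picture, `element_stiffness` plays the role of the generated, form-specific code, while `assemble` stands for the mesh- and linear-algebra-agnostic assembly loop that an interface like UFC allows to be written once.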

Proceedings ArticleDOI
29 Aug 2009
TL;DR: A comparative analysis of the behavioral dynamics of rural and urban societies, using four years of mobile phone data from all 1.4M subscribers within a small country, demonstrates that individuals change their patterns of communication to increase the similarity with their new social environment.
Abstract: We present a comparative analysis of the behavioral dynamics of rural and urban societies using four years of mobile phone data from all 1.4M subscribers within a small country. We use information from communication logs and top up denominations to characterize attributes such as socioeconomic status and region. We show that rural and urban communities differ dramatically not only in terms of personal network topologies, but also in terms of inferred behavioral characteristics such as travel. We confirm the hypothesis for behavioral adaptation, demonstrating that individuals change their patterns of communication to increase the similarity with their new social environment. To our knowledge, this is the first comprehensive comparison between regional groups of this size.

Proceedings ArticleDOI
29 Aug 2009
TL;DR: Data from the massively multiplayer online role-playing game EverQuest II is used to identify gold farmers; criteria for evaluating gold-farming detection techniques are given, and suggestions for future testing and evaluation are provided.
Abstract: Gold farming refers to the illicit practice of gathering and selling virtual goods in online games for real money. Although around one million gold farmers engage in gold-farming-related activities, a systematic study of identifying gold farmers has not been done to date. In this paper we use data from the massively multiplayer online role-playing game (MMORPG) EverQuest II to identify gold farmers. We perform an exploratory logistic regression analysis to identify salient descriptive statistics, followed by a machine-learning binary classification to identify a set of features for classification purposes. Given the cost associated with investigating gold farmers, we also give criteria for evaluating gold-farming detection techniques, and provide suggestions for future testing and evaluation.

Proceedings ArticleDOI
29 Aug 2009
TL;DR: This work evaluates the influence of traffic on the electrical power consumption of four switches found in home and professional environments, finding that for one of the switches the power consumption actually drops under high traffic loads, while for the others the situation is reversed.
Abstract: Precise evaluation of network appliance energy consumption is necessary to accurately model or simulate the power consumption of distributed systems. In this paper we evaluate the influence of traffic on the electrical power consumption of four switches found in home and professional environments. First we describe our measurement and data analysis approach, and how our results can be used to estimate power consumption when the average traffic bandwidth is known. Then we present the measurement results for two residential switches and two professional switches. For each type we present regression models and parameters describing their quality. As in other works, we find that for one of the switches the power consumption actually drops under high traffic loads, while for the others the situation is reversed. Our measurements show that, in most energy-consumption evaluations, network appliance energy cost can be approximated as constant; this work quantifies the possible variation of this cost.
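The regression models above can be sketched in their simplest form as an ordinary least-squares fit of power draw against average bandwidth. The measurement values below are invented for illustration; only the method (a linear power-vs-bandwidth model whose near-zero slope supports the "approximately constant" conclusion) follows the abstract:

```python
def fit_linear(bandwidth, power):
    """Ordinary least squares fit: power ≈ intercept + slope * bandwidth."""
    n = len(bandwidth)
    mean_x = sum(bandwidth) / n
    mean_y = sum(power) / n
    sxx = sum((x - mean_x) ** 2 for x in bandwidth)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(bandwidth, power))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical measurements: average bandwidth (Mbit/s) vs. power draw (W)
bw = [0, 100, 250, 500, 750, 1000]
watts = [3.10, 3.12, 3.14, 3.19, 3.23, 3.28]

base, per_mbit = fit_linear(bw, watts)
# A slope of a fraction of a watt per Gbit/s next to a ~3 W baseline is what
# justifies treating the switch's power as roughly constant.
print(round(base, 2), round(per_mbit * 1000, 3))  # intercept (W), W per Gbit/s
```

A fuller treatment would also report goodness-of-fit parameters (e.g. R²), as the abstract describes for each regression model.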

Proceedings ArticleDOI
29 Aug 2009
TL;DR: This paper proposes an improved ID-based remote mutual authentication with key agreement scheme for mobile devices on ECC that not only eliminates the security flaws of YC scheme but also reduces the computational costs between the user and the server.
Abstract: In 2009, Yang and Chang proposed an ID-based remote mutual authentication with key agreement scheme on the elliptic curve cryptosystem (ECC). Based on the ID-based concept, the Yang and Chang scheme (YC scheme) requires no additional computations for certificates and is not constructed from bilinear pairings, which are expensive operations on elliptic curves. In addition, the YC scheme not only provides mutual authentication but also supports session key agreement between the user and the server. Therefore, the YC scheme is more efficient and practical than related works. However, we find that the YC scheme is vulnerable to an impersonation attack and does not provide perfect forward secrecy, despite its efforts to perform mutual authentication and session key agreement between the user and the remote server at lower computational cost than related works. Therefore, this paper proposes an improved ID-based remote mutual authentication with key agreement scheme for mobile devices on ECC. Compared with the YC scheme, the proposed scheme is more secure, efficient, and practical for mobile devices because it not only eliminates the security flaws of the YC scheme but also reduces the computational costs between the user and the server.
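The key-agreement primitive underlying such ECC schemes is scalar multiplication on an elliptic curve group. A minimal Diffie–Hellman-style sketch on a textbook toy curve is below; it illustrates only the group operations and the shared-secret step, not the YC scheme's actual authentication protocol, and the curve and private keys are deliberately tiny:

```python
# Toy curve y^2 = x^3 + 2x + 2 over GF(17); generator G = (5, 1) has order 19.
# Parameters are for illustration only -- real deployments use standard curves.
P_MOD, A = 17, 2
G = (5, 1)
O = None  # point at infinity

def inv_mod(k, p):
    return pow(k, p - 2, p)  # Fermat inverse, valid since p is prime

def point_add(P, Q):
    if P is O:
        return Q
    if Q is O:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O                      # P + (-P) = O
    if P == Q:
        s = (3 * x1 * x1 + A) * inv_mod(2 * y1, P_MOD) % P_MOD
    else:
        s = (y2 - y1) * inv_mod(x2 - x1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    y3 = (s * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def scalar_mult(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = O
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

# Each side combines its private scalar with the other's public point;
# both arrive at the same shared point, from which a session key is derived.
a, b = 3, 7                       # toy private keys
A_pub, B_pub = scalar_mult(a, G), scalar_mult(b, G)
shared_a = scalar_mult(a, B_pub)
shared_b = scalar_mult(b, A_pub)
print(shared_a == shared_b)  # True
```

The scheme's claimed efficiency advantage comes from needing only such scalar multiplications rather than bilinear pairings, which are far costlier on mobile hardware.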

Proceedings ArticleDOI
29 Aug 2009
TL;DR: It is found that those who achieve the most in the game send and receive more communication, while those who perform the most efficiently at the game show no difference in communication behavior from other players.
Abstract: We examine the social behaviors of game experts in EverQuest II, a popular massively multiplayer online role-playing game (MMORPG). We rely on Exponential Random Graph Models (ERGM) to examine the anonymous privacy-protected social networks of 1,457 players over a five-day period. We find that those who achieve the most in the game send and receive more communication, while those who perform the most efficiently at the game show no difference in communication behavior from other players. Both achievement and performance experts tend to communicate with those at similar expertise levels, and higher-level experts are more likely to receive communication from other players.
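The "similar expertise levels" finding is a homophily effect. A much simpler descriptive check than a full ERGM, on invented players and ties, compares the observed fraction of same-level communication edges against the fraction expected if ties ignored expertise:

```python
from collections import Counter

def same_level_fraction(edges, level):
    """Fraction of communication ties linking players of the same expertise level."""
    same = sum(1 for u, v in edges if level[u] == level[v])
    return same / len(edges)

# Hypothetical players with expertise levels 1..3 and communication edges
level = {"p1": 3, "p2": 3, "p3": 1, "p4": 1, "p5": 2}
edges = [("p1", "p2"), ("p3", "p4"), ("p1", "p5"), ("p2", "p3"), ("p3", "p4")]

observed = same_level_fraction(edges, level)

# Baseline: expected same-level fraction for random pairing, from the level mix
counts = Counter(level.values())
n = len(level)
expected = sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

print(observed > expected)  # homophily: like levels communicate more than chance
```

An ERGM, as used in the paper, goes further by estimating such effects jointly while controlling for network dependencies (e.g. reciprocity and degree), which a raw fraction cannot do.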

Proceedings ArticleDOI
29 Aug 2009
TL;DR: A probabilistic notion of edge anonymity, called graph confidence, is proposed, which is general enough to capture the privacy breach made by an adversary who can pinpoint target persons in a graph partition based on any given set of topological features of vertices.
Abstract: Edges in social network graphs may represent sensitive relationships. In this paper, we consider the problem of edge anonymity in graphs. We propose a probabilistic notion of edge anonymity, called graph confidence, which is general enough to capture the privacy breach made by an adversary who can pinpoint target persons in a graph partition based on any given set of topological features of vertices. We consider a special type of edge anonymity problem that uses vertex degree to partition a graph. We analyze edge disclosure in real-world social networks and show that although some graphs can preserve vertex anonymity, they may still not preserve edge anonymity. We present three heuristic algorithms that protect edge anonymity using edge swap or edge deletion. Our experimental results, based on three real-world social networks and several utility measures, show that these algorithms can effectively preserve edge anonymity yet obtain anonymous graphs of acceptable utility.
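The edge-swap idea can be sketched as a degree-preserving rewiring: pick two edges (a,b) and (c,d) and replace them with (a,d) and (c,b), rejecting swaps that would create self-loops or duplicate edges. Every vertex keeps its degree, so any degree-based partition of the graph is unchanged while individual edges are perturbed. The example graph, swap count, and rejection loop below are illustrative assumptions, not the paper's exact heuristics:

```python
import random

def degree_preserving_swaps(edges, n_swaps, seed=0):
    """Rewire random edge pairs (a,b),(c,d) -> (a,d),(c,b), keeping every
    vertex's degree (and thus any degree-based partition) unchanged."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = set(frozenset(e) for e in edges)
    done, attempts = 0, 0
    while done < n_swaps and attempts < 100 * n_swaps:
        attempts += 1
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue  # swap would create a self-loop
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue  # swap would create a duplicate edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges

def degrees(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

original = [(0, 1), (0, 2), (1, 3), (2, 4), (3, 4), (1, 4)]
swapped = degree_preserving_swaps(original, n_swaps=3)
print(degrees(original) == degrees(swapped))  # True: degree sequence preserved
```

The utility trade-off the abstract measures corresponds to how many such swaps (or deletions) are needed before the adversary's confidence about any particular edge drops below the chosen threshold.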