
Showing papers by "Wei Chen published in 2010"


Proceedings ArticleDOI
25 Jul 2010
TL;DR: The results from extensive simulations demonstrate that the proposed algorithm is currently the best scalable solution to the influence maximization problem and significantly outperforms all other scalable heuristics, with as much as a 100%-260% increase in influence spread.
Abstract: Influence maximization, defined by Kempe, Kleinberg, and Tardos (2003), is the problem of finding a small set of seed nodes in a social network that maximizes the spread of influence under certain influence cascade models. The scalability of influence maximization is a key factor for enabling prevalent viral marketing in large-scale online social networks. Prior solutions, such as the greedy algorithm of Kempe et al. (2003) and its improvements, are slow and not scalable, while other heuristic algorithms do not provide consistently good performance on influence spreads. In this paper, we design a new heuristic algorithm that is easily scalable to millions of nodes and edges in our experiments. Our algorithm has a simple tunable parameter for users to control the balance between the running time and the influence spread of the algorithm. Our results from extensive simulations on several real-world and synthetic networks demonstrate that our algorithm is currently the best scalable solution to the influence maximization problem: (a) our algorithm scales beyond million-sized graphs where the greedy algorithm becomes infeasible, and (b) in all size ranges, our algorithm performs consistently well in influence spread: it is always among the best algorithms, and in most cases it significantly outperforms all other scalable heuristics, with as much as a 100%-260% increase in influence spread.
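For readers unfamiliar with the setting, below is a minimal sketch of the baseline this paper improves on: greedy seed selection with Monte Carlo estimation of spread under the independent cascade (IC) model of Kempe et al. It is not the paper's scalable heuristic; the graph representation, propagation probability, and trial count are illustrative assumptions.

import random
from collections import deque

def ic_spread(graph, seeds, prob=0.01, trials=200):
    """Estimate the expected influence spread of `seeds` under the IC model.
    `graph` maps each node to a list of out-neighbors."""
    total = 0
    for _ in range(trials):
        active = set(seeds)
        frontier = deque(seeds)
        while frontier:
            u = frontier.popleft()
            for v in graph.get(u, []):
                # each newly active node gets one chance to activate each neighbor
                if v not in active and random.random() < prob:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / trials

def greedy_seeds(graph, k, prob=0.01, trials=200):
    """Pick k seeds, each time adding the node with the largest estimated marginal spread."""
    seeds = []
    for _ in range(k):
        base = ic_spread(graph, seeds, prob, trials)
        best, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            gain = ic_spread(graph, seeds + [v], prob, trials) - base
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.append(best)
    return seeds

The paper's contribution is replacing the costly Monte Carlo estimation above with a tunable, locally computed heuristic so that seed selection scales to graphs with millions of nodes and edges.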

1,709 citations


Proceedings ArticleDOI
13 Dec 2010
TL;DR: This paper proposes the first scalable influence maximization algorithm tailored for the linear threshold model, which is scalable to networks with millions of nodes and edges, is orders of magnitude faster than the greedy approximation algorithm proposed by Kempe et al. and its optimized versions, and performs consistently among the best algorithms.
Abstract: Influence maximization is the problem of finding a small set of most influential nodes in a social network so that their aggregated influence in the network is maximized. In this paper, we study influence maximization in the linear threshold model, one of the important models formalizing the behavior of influence propagation in social networks. We first show that computing exact influence in general networks in the linear threshold model is #P-hard, which closes an open problem left in the seminal work on influence maximization by Kempe, Kleinberg, and Tardos, 2003. In contrast, we show that computing influence in directed acyclic graphs (DAGs) can be done in time linear in the size of the graphs. Based on the fast computation in DAGs, we propose the first scalable influence maximization algorithm tailored for the linear threshold model. We conduct extensive simulations to show that our algorithm is scalable to networks with millions of nodes and edges, is orders of magnitude faster than the greedy approximation algorithm proposed by Kempe et al. and its optimized versions, and performs consistently among the best algorithms, while other heuristic algorithms not designed specifically for the linear threshold model have unstable performance across different real-world networks.
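As a rough illustration of the linear-time computation on DAGs mentioned above, the sketch below uses the standard linear threshold property that, with thresholds drawn uniformly from [0, 1], a non-seed node's activation probability equals the weighted sum of its in-neighbors' activation probabilities; processing nodes in topological order then takes time linear in the graph size. The graph representation and names are our assumptions, and this is only a sketch of the idea, not the paper's full algorithm.

from collections import defaultdict, deque

def lt_activation_probs(nodes, edges, seeds):
    """`edges`: list of (u, v, w) meaning edge u -> v with LT weight w; the incoming
    weights of each node should sum to at most 1. Returns the activation probability
    of every node given seed set `seeds`, assuming the graph is a DAG."""
    in_edges = defaultdict(list)
    out_nbrs = defaultdict(list)
    in_deg = {v: 0 for v in nodes}
    for u, v, w in edges:
        in_edges[v].append((u, w))
        out_nbrs[u].append(v)
        in_deg[v] += 1

    # process nodes in topological order (Kahn's algorithm)
    order = deque(v for v in nodes if in_deg[v] == 0)
    ap = {}
    while order:
        v = order.popleft()
        if v in seeds:
            ap[v] = 1.0
        else:
            ap[v] = sum(w * ap[u] for u, w in in_edges[v])
        for x in out_nbrs[v]:
            in_deg[x] -= 1
            if in_deg[x] == 0:
                order.append(x)
    return ap

if __name__ == "__main__":
    nodes = ["a", "b", "c", "d"]
    edges = [("a", "b", 0.5), ("a", "c", 0.3), ("b", "d", 0.4), ("c", "d", 0.6)]
    print(lt_activation_probs(nodes, edges, seeds={"a"}))   # expected spread = sum of values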

918 citations


Journal ArticleDOI
01 Sep 2010
TL;DR: This is the first time the community detection problem is addressed by a game-theoretic framework that considers community formation as the result of individual agents' rational behaviors, and the proposed algorithm is effective in identifying overlapping communities.
Abstract: In this paper, we introduce a game-theoretic framework to address the community detection problem based on the structures of social networks. We formulate the dynamics of community formation as a strategic game called the community formation game: given an underlying social graph, we assume that each node is a selfish agent who selects communities to join or leave based on her own utility measurement. A community structure can be interpreted as an equilibrium of this game. We formulate the agents' utility as the combination of a gain function and a loss function. We allow each agent to select multiple communities, which naturally captures the concept of "overlapping communities". We propose a gain function based on the modularity concept introduced by Newman (Proc Natl Acad Sci 103(23):8577-8582, 2006), and a simple loss function that reflects the intrinsic costs incurred when people join communities. We conduct extensive experiments under this framework, and our results show that our algorithm is effective in identifying overlapping communities and is often better than the other algorithms we evaluated, especially when many people belong to multiple communities. To the best of our knowledge, this is the first time the community detection problem is addressed by a game-theoretic framework that considers community formation as the result of individual agents' rational behaviors.
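A schematic sketch of the kind of best-response dynamics such a community formation game induces is shown below: each agent repeatedly joins or leaves communities to improve a utility of the form gain minus loss, and the process stops at an equilibrium when no agent wants to deviate. The gain used here (fraction of the agent's edges covered by her communities) and the fixed per-membership cost are simplified stand-ins, not the paper's modularity-based gain and loss functions.

import random

def utility(node, labels, graph, membership, cost=0.3):
    """Toy utility: fraction of `node`'s neighbors sharing a community, minus a membership cost."""
    nbrs = graph[node]
    gain = sum(1 for v in nbrs if membership[v] & labels) / len(nbrs) if nbrs else 0.0
    return gain - cost * len(labels)

def community_game(graph, rounds=100, cost=0.3, seed=0):
    """`graph` maps node -> set of neighbors. Returns node -> set of community labels."""
    rng = random.Random(seed)
    membership = {v: {v} for v in graph}              # start with singleton communities
    order = list(graph)
    for _ in range(rounds):
        rng.shuffle(order)
        moved = False
        for v in order:
            current = membership[v]
            best, best_u = current, utility(v, current, graph, membership, cost)
            # candidate deviations: join a neighbor's community, or leave one of ours
            candidates = [current | {c} for u in graph[v] for c in membership[u]]
            candidates += [current - {c} for c in current if len(current) > 1]
            for cand in candidates:
                val = utility(v, cand, graph, membership, cost)
                if val > best_u + 1e-12:
                    best, best_u = cand, val
            if best != current:
                membership[v] = best
                moved = True
        if not moved:                                  # no profitable deviation: equilibrium
            break
    return membership

Allowing an agent to hold several labels at once is what makes the detected communities overlapping, mirroring the multi-community membership described in the abstract.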

194 citations


Patent
Wei Chen, Yajun Wang, Siyu Yang, Chi Wang
28 May 2010
TL;DR: A social network, its users, and their interactions may be modeled by graphs, which may be analyzed to determine influential users based on assigned influence values.
Abstract: Social networks have become platforms to disseminate and market information and ideas. A social network, its users, and the interactions of users may be modeled by graphs, which may be analyzed to determine influential users. In one example, nodes within a graph may be concurrently grouped into node groupings. Influence values corresponding to node counts within the node groupings may be assigned to the nodes within those groupings. Influential nodes may be determined based upon the assigned influence values. In another example, degrees of nodes (e.g., the edge count of a node) may be used to determine influential nodes within the graph. Upon selecting a node, the degrees of the selected node's neighbors may be discounted because the node was selected. In another example, trees corresponding to a current node and (e.g., maximum) influential paths from other nodes to the current node may be constructed and evaluated to determine a group of nodes.
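The second mechanism outlined in the abstract (selecting nodes by degree and discounting the degrees of a selected node's neighbors) can be illustrated roughly as follows. The simple "subtract one per edge to an already selected node" rule and the heap-based bookkeeping are illustrative assumptions, not the patented scoring formula.

import heapq

def degree_discount(graph, k):
    """`graph` maps node -> set of neighbors (undirected). Returns k seed nodes."""
    score = {v: len(nbrs) for v, nbrs in graph.items()}
    selected, chosen = [], set()
    heap = [(-s, v) for v, s in score.items()]
    heapq.heapify(heap)
    while heap and len(selected) < k:
        neg_s, v = heapq.heappop(heap)
        if v in chosen or -neg_s != score[v]:
            continue                      # stale heap entry
        chosen.add(v)
        selected.append(v)
        for u in graph[v]:
            if u not in chosen:
                score[u] -= 1             # discount: one of u's edges now leads to a seed
                heapq.heappush(heap, (-score[u], u))
    return selected

if __name__ == "__main__":
    g = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 4}, 4: {1, 3}, 5: {6}, 6: {5}}
    print(degree_discount(g, 2))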

66 citations


Journal ArticleDOI
Jing Wei, Wei Chen, W. Wu, Y.N. Pan, D.M. Gao, S.T. Wu, Y. M. Wu
TL;DR: The Experimental Advanced Superconducting Tokamak (EAST), as discussed by the authors, is a fully superconducting tokamak built to study steady-state operation; it has completed four operation campaigns and achieved good experimental results.
Abstract: The EAST is the Experimental Advanced Superconducting Tokamak. The mission of the EAST project is to address scientific issues of the continuous non-burning plasma scenario for steady-state operation and engineering issues of establishing the technology basis for superconducting tokamaks. Superconducting magnets were chosen for all poloidal field (PF) and toroidal field (TF) systems, since the engineering mission is to establish the technology basis of a fully superconducting tokamak for future fusion reactors. The superconducting magnets of EAST consist of sixteen TF coils and fourteen PF coils (seven coil pairs). To ensure good performance of the superconducting magnets, all TF magnets and most PF magnets were tested before assembly. Assembly of the main device was completed at the end of 2005, and at the beginning of 2006 the first engineering commissioning of the EAST system was successfully carried out. To date, the EAST device has been operated in four campaigns and has achieved good experimental results.

21 citations


Proceedings Article
06 Dec 2010
TL;DR: This paper decomposes the expected risk according to the two layers and makes use of the new concept of a two-layer Rademacher average; the resulting generalization bounds are quite intuitive and in accordance with previous empirical studies on the performance of ranking algorithms.
Abstract: This paper is concerned with generalization analysis of learning to rank for information retrieval (IR). In IR, data are hierarchically organized, i.e., consisting of queries and documents. Previous generalization analysis for ranking, however, has not fully considered this structure, and cannot explain how the simultaneous change of query number and document number in the training data will affect the performance of the learned ranking model. In this paper, we propose performing generalization analysis under the assumption of two-layer sampling, i.e., the i.i.d. sampling of queries and the conditional i.i.d. sampling of documents per query. Such a sampling can better describe the generation mechanism of real data, and the corresponding generalization analysis can better explain the real behaviors of learning-to-rank algorithms. However, it is challenging to perform such analysis, because the documents associated with different queries are not identically distributed, and the documents associated with the same query are no longer independent after being represented by features extracted from query-document matching. To tackle the challenge, we decompose the expected risk according to the two layers, and make use of the new concept of a two-layer Rademacher average. The generalization bounds we obtain are quite intuitive and are in accordance with previous empirical studies on the performance of ranking algorithms.
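For concreteness, the two-layer sampling setup and the risk decomposition described above can be written schematically as follows; the notation (the query distribution $P_Q$, the per-query document distribution $P_{D\mid q}$, and the loss $\ell$) is ours, not necessarily the paper's.

\[
  R(f) \;=\; \mathbb{E}_{q \sim P_Q}\,\mathbb{E}_{(x,y) \sim P_{D\mid q}}\bigl[\ell(f; x, y)\bigr],
  \qquad
  \widehat{R}(f) \;=\; \frac{1}{n}\sum_{i=1}^{n}\frac{1}{m_i}\sum_{j=1}^{m_i}\ell\bigl(f; x_{ij}, y_{ij}\bigr),
\]
\[
  R(f) - \widehat{R}(f)
  \;=\; \underbrace{R(f) - \frac{1}{n}\sum_{i=1}^{n} R_{q_i}(f)}_{\text{query layer}}
  \;+\; \underbrace{\frac{1}{n}\sum_{i=1}^{n} R_{q_i}(f) - \widehat{R}(f)}_{\text{document layer}},
\]

where $R_{q_i}(f)$ denotes the conditional risk for query $q_i$; each layer is then controlled by a (two-layer) Rademacher-average argument.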

13 citations


01 Jan 2010
TL;DR: Wang et al., as mentioned in this paper, proposed the use of sidebars, which display forum threads to users, as a mechanism to maximize user influence and boost participation.
Abstract: In online discussion forums, users are more motivated to take part in discussions when observing other users' participation, an effect of social influence among forum users. In this paper, we study how to utilize social influence to increase user participation in online forums. To do so, we propose the use of sidebars, which display forum threads to users, as a mechanism to maximize user influence and boost participation. We formally define the participation maximization problem with the sidebar mechanism, based on the social influence network. We show that it is a special instance of the social welfare maximization problem with submodular utility functions and that it is NP-hard. However, generic approximation algorithms for social welfare maximization are too slow to be feasible for real-world forums. Thus we design a heuristic algorithm, named Thread Allocation Based on Influence (TABI), to tackle the problem. Through extensive experiments using a dataset from a real-world online forum, we demonstrate that TABI consistently outperforms all other algorithms, including a personalized recommendation algorithm, in increasing forum participation. The results of this work could facilitate other related studies such as designs for recommendation systems. The problem of participation maximization based on influence also opens a new direction in the study of social influence. Moreover, the proposed techniques can be applied to other social media, e.g., to maximize overall attention for advertisements on Facebook.
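As a rough sketch of the allocation problem described above, the code below runs the generic greedy procedure for submodular welfare: each user's sidebar holds at most `cap` threads, each user has a monotone utility over the set of threads shown to her, and we repeatedly add the (user, thread) pair with the largest marginal gain. This is a generic baseline in the spirit of the problem formulation, not the paper's TABI heuristic; `utility`, the toy topic model, and all parameters are illustrative assumptions.

def greedy_allocation(users, threads, utility, cap=3):
    """`utility(user, shown)` returns that user's utility for a frozenset of threads `shown`.
    Returns a dict user -> set of threads placed in her sidebar."""
    shown = {u: set() for u in users}
    improved = True
    while improved:
        improved = False
        best = None                                  # (gain, user, thread)
        for u in users:
            if len(shown[u]) >= cap:
                continue
            base = utility(u, frozenset(shown[u]))
            for t in threads:
                if t in shown[u]:
                    continue
                gain = utility(u, frozenset(shown[u] | {t})) - base
                if best is None or gain > best[0]:
                    best = (gain, u, t)
        if best is not None and best[0] > 0:
            _, u, t = best
            shown[u].add(t)
            improved = True
    return shown

if __name__ == "__main__":
    # toy coverage-style utility: a user values threads on topics she follows
    follows = {"alice": {"ml", "db"}, "bob": {"db"}}
    topics = {"t1": "ml", "t2": "db", "t3": "db"}
    util = lambda u, shown: len({topics[t] for t in shown} & follows[u])
    print(greedy_allocation(follows.keys(), topics.keys(), util, cap=2))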

2 citations


Journal ArticleDOI
TL;DR: In this paper, a medium-sized superconducting magnet wound with NbTi wires was designed, fabricated, and tested for a superconducting magnetic energy storage system; a cryogenic system keeps the liquid helium in a zero-evaporation state, and the experimental results show that the magnet meets the design requirements.
Abstract: A medium-sized superconducting magnet wound using NbTi wires was designed, fabricated, and tested. It has been developed for a superconducting magnetic energy storage system. The solenoid magnet has a clear bore of 250 mm, an outer diameter of 380 mm, and a height of 589 mm. The magnet consists of one main coil and two compensating coils and is dry-wound using rectangular superconducting wires with dimensions of 1.28 mm × 0.83 mm. Grooves and reinforced bulks are designed on the supporting structure to improve the cooling ability and the mechanical strength. A cryogenic system has been designed to keep the liquid helium in a zero-evaporation state. The magnet has been tested in a compact cryostat, and it generates a central magnetic field of 4 T at the designed operating current of 126 A. The experimental results show that the performance of the magnet meets the requirements. The details of the design, fabrication, and testing of the superconducting magnet are presented in this paper.

2 citations


Journal ArticleDOI
TL;DR: In this paper, a model coil for the 40-T hybrid magnet is being built at the Chinese High Magnetic Field Laboratory, Chinese Academy of Sciences, and the restack-rod-process (RRP) strands adopted for the model coil were heat-treated according to the heat treatment (HT) schedule recommended by the manufacturer, Oxford Superconducting Technology.
Abstract: A model coil for the 40-T hybrid magnet is being built at the Chinese High Magnetic Field Laboratory, Chinese Academy of Sciences. Restack-rod-process (RRP) strands adopted for the model coil were heat-treated according to the heat treatment (HT) schedule recommended by the manufacturer, Oxford Superconducting Technology. The microstructure of the reacted strand was analyzed. Measurements of the critical current, the residual resistivity ratio, and the hysteresis losses were also carried out. The results of the critical-characteristic measurements confirmed that the performance of the heat-treated wire was good. All characteristics indicated that the HT of the RRP strands was successfully completed.

1 citation


Posted Content
TL;DR: This paper studies the incomplete-information case, where agents know a common distribution over others' private valuations and make decisions simultaneously, and develops a polynomial-time algorithm that exactly computes the equilibrium and the optimal price when pairwise influences are non-negative.
Abstract: In revenue maximization for selling a digital product in a social network, the utility of an agent is often considered to have two parts: a private valuation, and linearly additive influences from other agents. We study the incomplete-information case, where agents know a common distribution over others' private valuations and make decisions simultaneously. The "rational behavior" of agents in this case is captured by the well-known Bayesian Nash equilibrium. Two challenging questions arise: how to compute an equilibrium, and how to optimize a pricing strategy accordingly so as to maximize the revenue, assuming agents follow the equilibrium? In this paper, we mainly focus on the natural model where the private valuation of each agent is sampled from a uniform distribution, which turns out to be already challenging. Our main result is a polynomial-time algorithm that can exactly compute the equilibrium and the optimal price when pairwise influences are non-negative. If negative influences are allowed, computing any equilibrium even approximately is PPAD-hard. Our algorithm can also be used to design an FPTAS for optimizing a discriminative price profile.
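A schematic sketch of the equilibrium structure the abstract describes: with private valuations v_i ~ Uniform[0, 1] and non-negative pairwise influences a[i][j], the equilibrium buying probabilities q satisfy the fixed point q_i = clamp(1 - p + sum_j a[i][j] * q_j, 0, 1), and since the map is monotone the iteration from q = 0 converges. The fixed-point iteration and the price grid search below are our illustrative approximation, not the paper's exact polynomial-time algorithm.

def equilibrium(influence, price, iters=1000, tol=1e-10):
    """`influence[i][j]` is the (non-negative) influence of agent j on agent i.
    Returns the buying probability of each agent at the given price."""
    n = len(influence)
    q = [0.0] * n
    for _ in range(iters):
        new_q = [
            min(1.0, max(0.0, 1.0 - price + sum(influence[i][j] * q[j] for j in range(n))))
            for i in range(n)
        ]
        if max(abs(a - b) for a, b in zip(q, new_q)) < tol:
            return new_q
        q = new_q
    return q

def best_uniform_price(influence, grid=200):
    """Grid search over prices in (0, 2] for the single price maximizing p * sum_i q_i."""
    best_p, best_rev = 0.0, 0.0
    for k in range(1, grid + 1):
        p = 2.0 * k / grid
        rev = p * sum(equilibrium(influence, p))
        if rev > best_rev:
            best_p, best_rev = p, rev
    return best_p, best_rev

if __name__ == "__main__":
    a = [[0.0, 0.2], [0.3, 0.0]]      # small two-agent example
    print(best_uniform_price(a))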

1 citation