Author

My T. Thai

Bio: My T. Thai is an academic researcher from the University of Florida. The author has contributed to research in topics: Approximation algorithm & Computer science. The author has an h-index of 42, has co-authored 252 publications receiving 7084 citations. Previous affiliations of My T. Thai include Kyung Hee University & University of Arkansas.


Papers
Proceedings ArticleDOI
13 Mar 2005
TL;DR: An efficient method is proposed to extend sensor network lifetime by organizing the sensors into a maximal number of set covers that are activated successively, together with two heuristics, one based on linear programming and one on a greedy approach, that efficiently compute the covers.
Abstract: A critical aspect of applications with wireless sensor networks is network lifetime. Power-constrained wireless sensor networks are usable as long as they can communicate sensed data to a processing node. Sensing and communications consume energy; therefore, judicious power management and sensor scheduling can effectively extend network lifetime. To cover a set of targets with known locations when ground access in the remote area is prohibited, one solution is to deploy the sensors remotely, from an aircraft. The lack of precise sensor placement is compensated by a large sensor population deployed in the drop zone, which improves the probability of target coverage. The data collected from the sensors is sent to a central node (e.g. cluster head) for processing. In this paper we propose an efficient method to extend the sensor network lifetime by organizing the sensors into a maximal number of set covers that are activated successively. Only the sensors from the current active set are responsible for monitoring all targets and for transmitting the collected data, while all other nodes are in a low-energy sleep mode. By allowing sensors to participate in multiple sets, our problem formulation increases the network lifetime compared with related work [M. Cardei et al.], which has the additional requirements that sensor sets be disjoint and operate for equal time intervals. In this paper we model the solution as the maximum set covers problem and design two heuristics that efficiently compute the sets, using linear programming and a greedy approach. Simulation results are presented to verify our approaches.
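
To make the set-cover formulation above concrete, here is a minimal greedy sketch in Python of organizing sensors into successive (possibly non-disjoint) set covers. The coverage sets, energy budgets, and tie-breaking rule are illustrative assumptions; the paper's own LP-based and greedy heuristics differ in their details.

# Illustrative greedy sketch of organizing sensors into successive set covers.
# The coverage sets, energy budgets, and tie-breaking rule are assumptions for
# demonstration; the paper's LP-based and greedy heuristics differ in detail.

def greedy_set_covers(coverage, energy, targets):
    """coverage: dict sensor -> set of targets it monitors
       energy:   dict sensor -> number of rounds the sensor can stay active
       targets:  set of all targets that must be covered in every round
       Returns a list of covers (one per round) until full coverage fails."""
    covers = []
    while True:
        remaining = set(targets)
        cover = []
        while remaining:
            # Pick the live sensor covering the most still-uncovered targets.
            best = max(
                (s for s in coverage if energy[s] > 0 and s not in cover),
                key=lambda s: len(coverage[s] & remaining),
                default=None,
            )
            if best is None or not (coverage[best] & remaining):
                return covers  # no sensor can help: lifetime is exhausted
            cover.append(best)
            remaining -= coverage[best]
        for s in cover:
            energy[s] -= 1  # sensors may appear in multiple covers (non-disjoint)
        covers.append(cover)


if __name__ == "__main__":
    coverage = {"s1": {"t1", "t2"}, "s2": {"t2", "t3"}, "s3": {"t1", "t3"}}
    energy = {"s1": 2, "s2": 2, "s3": 2}
    print(greedy_set_covers(coverage, dict(energy), {"t1", "t2", "t3"}))

The loop stops as soon as no full cover of the targets can be formed, so the number of returned covers is the heuristic's estimate of network lifetime in rounds.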

1,046 citations

Posted Content
TL;DR: SSA and D-SSA, as proposed in this paper, are two sampling frameworks for IM-based viral marketing problems that are up to 1200 times faster than the SIGMOD'15 best method, IMM, while providing the same $(1-1/e-\epsilon)$ approximation guarantee.
Abstract: Influence Maximization (IM), which seeks a small set of key users who spread influence widely into the network, is a core problem in multiple domains. It finds applications in viral marketing, epidemic control, and assessing cascading failures within complex systems. Despite the huge amount of effort, IM in billion-scale networks such as Facebook, Twitter, and the World Wide Web has not been satisfactorily solved. Even state-of-the-art methods such as TIM+ and IMM may take days on those networks. In this paper, we propose SSA and D-SSA, two novel sampling frameworks for IM-based viral marketing problems. SSA and D-SSA are up to 1200 times faster than the SIGMOD'15 best method, IMM, while providing the same $(1-1/e-\epsilon)$ approximation guarantee. Underlying our frameworks is an innovative Stop-and-Stare strategy in which they stop at exponential check points to verify (stare) whether there is adequate statistical evidence on the solution quality. Theoretically, we prove that SSA and D-SSA are the first approximation algorithms that use (asymptotically) minimum numbers of samples, meeting strict theoretical thresholds characterized for IM. The absolute superiority of SSA and D-SSA is confirmed through extensive experiments on real network data for IM and another topic-aware viral marketing problem, named TVM. The source code is available at this https URL
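
As a rough illustration of the Stop-and-Stare control flow described above, the following Python sketch doubles a pool of reverse-reachable (RR) samples at exponential checkpoints and "stares" by re-estimating the chosen seeds' coverage on fresh samples. The toy graph, edge probability, and simple acceptance test are placeholders, not SSA/D-SSA's actual RIS machinery or statistical stopping conditions.

import random

# Schematic stop-and-stare loop for influence maximization on a toy graph.
# The graph, uniform edge probability, and acceptance test are illustrative
# assumptions, not the algorithm's actual bounds or stopping conditions.

GRAPH = {1: [2, 3], 2: [4], 3: [4], 4: []}          # directed toy graph
IN_EDGES = {v: [u for u in GRAPH if v in GRAPH[u]] for v in GRAPH}
P = 0.5                                              # edge probability (IC model)

def rr_sample():
    """One reverse-reachable set: start at a random node, walk edges backwards."""
    root = random.choice(list(GRAPH))
    seen, stack = {root}, [root]
    while stack:
        v = stack.pop()
        for u in IN_EDGES[v]:
            if u not in seen and random.random() < P:
                seen.add(u)
                stack.append(u)
    return seen

def greedy_seeds(samples, k):
    """Pick k nodes covering the most RR sets (standard max-coverage greedy)."""
    covered, seeds = set(), []
    for _ in range(k):
        best = max((u for u in GRAPH if u not in seeds),
                   key=lambda u: sum(1 for i, s in enumerate(samples)
                                     if i not in covered and u in s))
        seeds.append(best)
        covered |= {i for i, s in enumerate(samples) if best in s}
    return seeds, len(covered) / len(samples)

def stop_and_stare(k=2, eps=0.2, n0=256, rounds=10):
    samples = [rr_sample() for _ in range(n0)]
    for _ in range(rounds):
        seeds, est = greedy_seeds(samples, k)                 # solve on current pool
        fresh = [rr_sample() for _ in range(len(samples))]
        check = sum(1 for s in fresh if set(seeds) & s) / len(fresh)
        if check >= (1 - eps) * est:                          # estimates agree: stop
            return seeds, check
        samples += [rr_sample() for _ in range(len(samples))]  # double the pool
    return seeds, check

if __name__ == "__main__":
    print(stop_and_stare())

Because the pool doubles each round, the total number of samples generated is within a constant factor of what the final round needed, which is the intuition behind the near-minimal sample complexity claim.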

272 citations

Journal ArticleDOI
TL;DR: This paper studies the Interdependent Power Network Disruptor (IPND) optimization problem to identify critical nodes in an interdependent power network whose removals maximally destroy its functions due to both malfunction of these nodes and the cascading failures of its interdependent communication network.
Abstract: Power networks and information systems are becoming increasingly interdependent, both to better support their functionality and to improve economic efficiency. However, power networks also tend to be more vulnerable due to cascading failures originating in their interdependent information systems, i.e., failures in the information systems can cause failures of the coupled portion of the power networks. Therefore, accurate vulnerability assessment of interdependent power networks is of great importance in the presence of unexpected disruptive events or adversarial attacks targeting critical network nodes. In this paper, we study the Interdependent Power Network Disruptor (IPND) optimization problem to identify critical nodes in an interdependent power network whose removal maximally destroys its functions, due both to the malfunction of these nodes and to the cascading failures of its interdependent communication network. First, we show the IPND problem is NP-hard to approximate within a factor of $(2-\epsilon)$. Despite its intractability, we propose a greedy framework with novel centrality functions based on the networks' interdependencies to solve this problem efficiently and in a timely manner. An extensive experiment not only illustrates the effectiveness of our approach on networks with different topologies and interdependencies, but also highlights some important observations that help to strengthen the robustness of interdependent networks in the future.
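
The greedy framework described above can be illustrated with a small Python sketch that repeatedly picks the node whose removal triggers the largest cascade across the interdependency links. The failure model (a node fails when any node it depends on fails) and the toy dependency map are simplifying assumptions, not the paper's IPND model or its interdependency-aware centrality functions.

# Illustrative greedy sketch for picking critical nodes in an interdependent
# network. The failure rule and toy dependency map are assumptions made for
# demonstration only.

def cascade(initial_failures, depends_on):
    """depends_on: dict node -> set of nodes whose failure kills this node."""
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for node, deps in depends_on.items():
            if node not in failed and deps & failed:
                failed.add(node)
                changed = True
    return failed

def greedy_critical_nodes(nodes, depends_on, k):
    """Greedily pick k nodes whose removal maximizes the cascade size."""
    chosen = set()
    for _ in range(k):
        best = max((n for n in nodes if n not in chosen),
                   key=lambda n: len(cascade(chosen | {n}, depends_on)))
        chosen.add(best)
    return chosen

if __name__ == "__main__":
    # p* are power nodes, c* are communication nodes, coupled via dependencies.
    depends_on = {"p1": {"c1"}, "c1": {"p1"},
                  "p2": {"c1"}, "c2": {"p2"},
                  "p3": {"c2"}, "c3": {"p3"}}
    print(greedy_critical_nodes(list(depends_on), depends_on, k=1))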

237 citations

Proceedings ArticleDOI
14 Jun 2016
TL;DR: Theoretically, it is proved that SSA and D-SSA are the first approximation algorithms that use (asymptotically) minimum numbers of samples, meeting strict theoretical thresholds characterized for IM.
Abstract: Influence Maximization (IM), which seeks a small set of key users who spread influence widely into the network, is a core problem in multiple domains. It finds applications in viral marketing, epidemic control, and assessing cascading failures within complex systems. Despite the huge amount of effort, IM in billion-scale networks such as Facebook, Twitter, and the World Wide Web has not been satisfactorily solved. Even state-of-the-art methods such as TIM+ and IMM may take days on those networks. In this paper, we propose SSA and D-SSA, two novel sampling frameworks for IM-based viral marketing problems. SSA and D-SSA are up to 1200 times faster than the SIGMOD'15 best method, IMM, while providing the same (1-1/e-ε) approximation guarantee. Underlying our frameworks is an innovative Stop-and-Stare strategy in which they stop at exponential check points to verify (stare) whether there is adequate statistical evidence on the solution quality. Theoretically, we prove that SSA and D-SSA are the first approximation algorithms that use (asymptotically) minimum numbers of samples, meeting strict theoretical thresholds characterized for IM. The absolute superiority of SSA and D-SSA is confirmed through extensive experiments on real network data for IM and another topic-aware viral marketing problem, named TVM.

236 citations

Proceedings ArticleDOI
10 Apr 2011
TL;DR: This paper presents Quick Community Adaptation (QCA), an adaptive modularity-based method for identifying and tracing the community structure of dynamic online social networks, and demonstrates the applicability of the algorithm via a realistic application to routing strategies in MANETs.
Abstract: Social networks exhibit a very special property: community structure. Understanding the network community structure is of great advantage. It not only provides helpful information for developing more social-aware strategies for social network problems but also promises a wide range of applications enabled by mobile networking, such as routing in Mobile Ad Hoc Networks (MANETs) and worm containment in cellular networks. Unfortunately, understanding this structure is very challenging, especially in dynamic social networks where social activities and interactions evolve rapidly. Can we quickly and efficiently identify the network community structure? Can we adaptively update the network structure based on previously known information instead of recomputing from scratch? In this paper, we present Quick Community Adaptation (QCA), an adaptive modularity-based method for identifying and tracing the community structure of dynamic online social networks. Our approach not only quickly and efficiently updates network communities through a series of changes, using only the structures identified from previous network snapshots, but also traces the evolution of the community structure over time. To illustrate the effectiveness of our algorithm, we extensively test QCA on real-world dynamic social networks, including the ENRON email network, the arXiv e-print citation network, and the Facebook network. Finally, we demonstrate the applicability of our algorithm via a realistic application to routing strategies in MANETs. The comparative results reveal that social-aware routing strategies employing QCA as a community detection core outperform currently available methods.
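
For intuition, here is a minimal Python sketch of the modularity bookkeeping behind an adaptive modularity-based method of this kind: when a new edge arrives, each endpoint is tentatively moved into the other's community, and the move is kept only if modularity improves. The naive full recomputation of modularity is an illustrative shortcut; QCA itself relies on local update rules for node and edge insertions and removals rather than recomputation.

# Minimal sketch of modularity-driven updates on edge arrival. The naive full
# recomputation and the local move rule are illustrative assumptions, not the
# paper's update rules.

def modularity(adj, community):
    """adj: dict node -> set of neighbours (undirected); community: node -> label.
    Standard Newman-Girvan modularity, computed naively over all node pairs."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2
    deg = {v: len(adj[v]) for v in adj}
    q = 0.0
    for u in adj:
        for v in adj:
            if community[u] == community[v]:
                a_uv = 1 if v in adj[u] else 0
                q += a_uv - deg[u] * deg[v] / (2 * m)
    return q / (2 * m)

def handle_new_edge(adj, community, u, v):
    """On a new edge, try moving each endpoint into the other's community and
    keep the change only if global modularity improves."""
    adj[u].add(v)
    adj[v].add(u)
    best_q = modularity(adj, community)
    for node, other in ((u, v), (v, u)):
        trial = dict(community)
        trial[node] = community[other]
        q = modularity(adj, trial)
        if q > best_q:
            community, best_q = trial, q
    return community, best_q

if __name__ == "__main__":
    adj = {1: {2}, 2: {1}, 3: {4}, 4: {3}}
    community = {1: "A", 2: "A", 3: "B", 4: "B"}
    print(handle_new_edge(adj, community, 2, 3))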

226 citations


Cited by
Journal ArticleDOI
TL;DR: This work offers a comprehensive review of both the structural and dynamical organization of graphs made of diverse relationships (layers) between their constituents, and covers several relevant issues, from a full redefinition of the basic structural measures to understanding how the multilayer nature of the network affects processes and dynamics.

2,669 citations

Journal ArticleDOI
TL;DR: This paper provides an up-to-date picture of CloudIoT applications in the literature, with a focus on their specific research challenges, and identifies open issues and future directions in this field, which is expected to play a leading role in the landscape of the Future Internet.

1,880 citations

Journal ArticleDOI
TL;DR: A framework is proposed for evaluating algorithms' ability to detect overlapping nodes, which helps to assess overdetection and underdetection; for networks with low overlapping density, SLPA, OSLOM, Game, and COPRA offer better performance than the other tested algorithms.
Abstract: This article reviews the state-of-the-art in overlapping community detection algorithms, quality measures, and benchmarks. A thorough comparison of different algorithms (fourteen in total) is provided. In addition to community-level evaluation, we propose a framework for evaluating algorithms' ability to detect overlapping nodes, which helps to assess overdetection and underdetection. After considering community-level detection performance, measured by normalized mutual information and the Omega index, and node-level detection performance, measured by F-score, we reached the following conclusions. For networks with low overlapping density, SLPA, OSLOM, Game, and COPRA offer better performance than the other tested algorithms. For networks with high overlapping density and high overlapping diversity, both SLPA and Game provide relatively stable performance. However, test results also suggest that detection in such networks is not yet fully resolved. A common feature observed by various algorithms in real-world networks is the relatively small fraction of overlapping nodes (typically less than 30%), each of which belongs to only 2 or 3 communities.
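
The node-level evaluation mentioned above can be sketched as a binary classification task: a node is "overlapping" if it belongs to more than one community, and detected overlapping nodes are scored against the ground truth with an F-score, where low precision signals overdetection and low recall signals underdetection. The toy covers in the Python sketch below are illustrative; the survey additionally uses NMI and the Omega index for community-level evaluation.

# Small sketch of node-level evaluation of overlapping community detection.
# The toy covers are illustrative assumptions.

def overlapping_nodes(cover):
    """cover: list of communities (sets of nodes); overlapping = in >1 community."""
    count = {}
    for community in cover:
        for node in community:
            count[node] = count.get(node, 0) + 1
    return {node for node, c in count.items() if c > 1}

def overlap_f_score(detected_cover, ground_truth_cover):
    detected = overlapping_nodes(detected_cover)
    truth = overlapping_nodes(ground_truth_cover)
    if not detected or not truth:
        return 0.0
    tp = len(detected & truth)
    if tp == 0:
        return 0.0
    precision = tp / len(detected)   # low precision indicates overdetection
    recall = tp / len(truth)         # low recall indicates underdetection
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    detected = [{1, 2, 3}, {3, 4, 5}, {5, 6}]    # detected overlapping nodes: {3, 5}
    truth = [{1, 2, 3}, {3, 4, 5, 6}]            # true overlapping nodes: {3}
    print(overlap_f_score(detected, truth))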

1,166 citations

01 Jan 2013

1,098 citations