Author

Tian Wang

Bio: Tian Wang is an academic researcher from Huaqiao University. The author has contributed to research in the topics Wireless sensor network & Cloud computing. The author has an h-index of 46 and has co-authored 285 publications receiving 7,163 citations. Previous affiliations of Tian Wang include City University of Macau & Central South University.


Papers
Proceedings ArticleDOI
26 May 2008
TL;DR: This work proposes a rendezvous-based data collection approach in which a subset of nodes serve as rendezvous points that buffer and aggregate data originating from sources and transfer it to the base station when it arrives.
Abstract: Recent research shows that significant energy saving can be achieved in wireless sensor networks with a mobile base station that collects data from sensor nodes via short-range communications. However, a major performance bottleneck of such WSNs is the significantly increased latency in data collection due to the low movement speed of mobile base stations. To address this issue, we propose a rendezvous-based data collection approach in which a subset of nodes serve as rendezvous points that buffer and aggregate data originating from sources and transfer it to the base station when it arrives. This approach combines the advantages of controlled mobility and in-network data caching and can achieve a desirable balance between network energy saving and data collection delay. We propose two efficient rendezvous design algorithms with provable performance bounds for mobile base stations with variable and fixed tracks, respectively. The effectiveness of our approach is validated through both theoretical analysis and extensive simulations.

320 citations
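The paper describes the rendezvous idea only at a high level here. The following is a minimal, hypothetical sketch of how rendezvous points might be chosen, using a toy cost that trades forwarding distance against the base station's detour; the function names, the weighting factor, and the geometric model are illustrative assumptions, not the authors' algorithm.

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def choose_rendezvous_points(sources, candidates, track_point, k):
    """Greedy illustration: pick k rendezvous points (RPs) that minimize the
    total 'forwarding cost' (source -> nearest RP) plus a penalty for how far
    each RP sits from the mobile base station's track."""
    chosen = []
    for _ in range(k):
        best, best_cost = None, float("inf")
        for c in candidates:
            if c in chosen:
                continue
            trial = chosen + [c]
            # Energy proxy: every source forwards to its nearest RP.
            forward = sum(min(dist(s, rp) for rp in trial) for s in sources)
            # Delay proxy: RPs far from the track delay the base station.
            travel = sum(dist(rp, track_point) for rp in trial)
            cost = forward + 0.5 * travel
            if cost < best_cost:
                best, best_cost = c, cost
        chosen.append(best)
    return chosen

if __name__ == "__main__":
    random.seed(1)
    sources = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]
    candidates = sources  # any sensor node may serve as an RP
    rps = choose_rendezvous_points(sources, candidates, track_point=(50, 0), k=3)
    print("chosen rendezvous points:", rps)
```

In the actual paper the rendezvous design algorithms come with provable performance bounds for variable and fixed base-station tracks; the greedy proxy above only illustrates the energy/delay trade-off being balanced.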

Journal Article
TL;DR: The detailed cloud computing service system based on big data, which provides high-performance solutions for large-scale data storage, processing and analysis, is introduced.
Abstract: As one of the main development directions in the information field, big data technology can be applied for data mining, data analysis and data sharing in massive data sets, and it creates huge economic benefits by exploiting the potential value of data. Meanwhile, it can provide decision-making strategies for social and economic development. Big data service architecture is a new service economic model that takes data as a resource, and it loads and extracts the data collected from different data sources. This service architecture provides various customized data processing methods, data analysis and visualization services for service consumers. This paper first briefly introduces the general big data service architecture and the technical processing framework, which covers data collection and storage. Next, we discuss big data processing and analysis according to different service requirements, which can present valuable data for service consumers. Then, we introduce the detailed cloud computing service system based on big data, which provides high-performance solutions for large-scale data storage, processing and analysis. Finally, we summarize some big data application scenarios over various fields.

218 citations
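As a rough illustration of the layered flow described above (collection, storage, processing/analysis, and delivery of results to service consumers), here is a minimal toy pipeline; the class and method names are invented for this sketch and do not come from the paper.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class BigDataPipeline:
    """Toy illustration of a layered big data service pipeline:
    collect -> store -> process/analyze -> serve results to consumers."""
    storage: List[Dict[str, Any]] = field(default_factory=list)

    def collect(self, sources: List[Callable[[], Dict[str, Any]]]) -> None:
        # Collection layer: pull one record from each registered source.
        for source in sources:
            self.storage.append(source())

    def analyze(self, metric: str) -> float:
        # Processing/analysis layer: a trivial aggregate over stored records.
        values = [r[metric] for r in self.storage if metric in r]
        return sum(values) / len(values) if values else 0.0

if __name__ == "__main__":
    pipeline = BigDataPipeline()
    pipeline.collect([lambda: {"sensor": "a", "temp": 21.5},
                      lambda: {"sensor": "b", "temp": 23.0}])
    print("mean temp:", pipeline.analyze("temp"))
```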

Journal ArticleDOI
TL;DR: A novel architecture that integrates a trust evaluation mechanism and service template with a balance dynamics based on cloud and edge computing is proposed to overcome problems of security and efficiency of IoT-Cloud systems.
Abstract: The Internet of Things (IoT)-Cloud combines the IoT and cloud computing, which not only enhances the IoT’s capability but also expands the scope of its applications. However, it exhibits significant security and efficiency problems that must be solved. Internal attacks account for a large fraction of the associated security problems; however, traditional security strategies are not capable of addressing these attacks effectively. Moreover, as repeated/similar service requirements become greater in number, the efficiency of IoT-Cloud services is seriously affected. In this paper, a novel architecture that integrates a trust evaluation mechanism and service template with a balance dynamics based on cloud and edge computing is proposed to overcome these problems. In this architecture, the edge network and the edge platform are designed in such a way as to reduce resource consumption and ensure the extensibility of the trust evaluation mechanism, respectively. To improve the efficiency of IoT-Cloud services, the service parameter template is established in the cloud and the service parsing template is established in the edge platform. Moreover, the edge network can assist the edge platform in establishing service parsing templates based on the trust evaluation mechanism and meet special service requirements. The experimental results illustrate that this edge-based architecture can improve both the security and efficiency of IoT-Cloud systems.

208 citations
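The trust evaluation mechanism is described only architecturally in the abstract. One plausible, simplified realization is a per-device reputation score updated from interaction outcomes and compared against an admission threshold; the class name, parameters, and thresholds below are assumptions for illustration, not the mechanism from the paper.

```python
class TrustEvaluator:
    """Toy edge-side trust evaluation: each device's trust score is an
    exponentially weighted average of its observed good/bad interactions."""

    def __init__(self, alpha=0.2, threshold=0.6):
        self.alpha = alpha          # weight given to the newest observation
        self.threshold = threshold  # minimum trust to accept service requests
        self.scores = {}            # device id -> trust score in [0, 1]

    def report(self, device_id, interaction_ok):
        """Update a device's trust after one interaction (True = well-behaved)."""
        old = self.scores.get(device_id, 0.5)  # unknown devices start neutral
        observed = 1.0 if interaction_ok else 0.0
        self.scores[device_id] = (1 - self.alpha) * old + self.alpha * observed

    def is_trusted(self, device_id):
        return self.scores.get(device_id, 0.5) >= self.threshold

if __name__ == "__main__":
    t = TrustEvaluator()
    for ok in [True, True, False, True, True]:
        t.report("sensor-42", ok)
    print("sensor-42 trusted:", t.is_trusted("sensor-42"), round(t.scores["sensor-42"], 3))
```

An exponentially weighted score like this forgets old behavior gradually, which is one common way to keep internal attackers from banking goodwill indefinitely; the paper's actual mechanism, including how the edge network assists the edge platform, is more involved.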

Journal ArticleDOI
TL;DR: A rendezvous-based data collection protocol is designed that facilitates reliable data transfers from RPs to MEs in the presence of significant unexpected delays in ME movement and network communication.
Abstract: Recent research shows that significant energy saving can be achieved in wireless sensor networks by using mobile elements (MEs) capable of carrying data mechanically. However, the low movement speed of MEs hinders their use in data-intensive sensing applications with temporal constraints. To address this issue, we propose a rendezvous-based approach in which a subset of nodes serve as rendezvous points (RPs) that buffer data originating from sources and transfer it to MEs when they arrive. RPs enable MEs to collect a large volume of data at a time without traveling long distances, which can achieve a desirable balance between network energy saving and data collection delay. We develop two rendezvous planning algorithms, RP-CP and RP-UG. RP-CP finds the optimal RPs when MEs move along the data routing tree, while RP-UG greedily chooses the RPs with the maximum energy saving to travel distance ratios. We design a rendezvous-based data collection protocol that facilitates reliable data transfers from RPs to MEs in the presence of significant unexpected delays in ME movement and network communication. Our approach is validated through extensive simulations.

206 citations
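RP-UG is described as greedily choosing RPs with the maximum energy-saving-to-travel-distance ratio. The sketch below imitates that ratio rule under very crude proxies (straight-line distances as a stand-in for forwarding energy, an out-and-back detour as the travel cost); it is an illustrative approximation, not the published RP-UG algorithm.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def energy_saving(candidate, sources, current_rps, sink):
    """Reduction in total source-to-collection-point distance (a crude proxy
    for hop energy) if `candidate` is added to the current set of RPs."""
    def cost(rps):
        points = rps + [sink]
        return sum(min(dist(s, p) for p in points) for s in sources)
    return cost(current_rps) - cost(current_rps + [candidate])

def greedy_rp_selection(sources, candidates, sink, travel_budget):
    """Greedy rule in the spirit of a utility/cost ratio: repeatedly add the
    candidate with the highest (energy saving) / (extra travel distance) ratio
    until the mobile element's travel budget is exhausted."""
    rps, used = [], 0.0
    remaining = list(candidates)
    while remaining:
        best, best_ratio, best_travel = None, 0.0, 0.0
        for c in remaining:
            # Extra travel: a detour from the sink out to the candidate and back.
            travel = 2 * dist(c, sink)
            if used + travel > travel_budget or travel == 0:
                continue
            ratio = energy_saving(c, sources, rps, sink) / travel
            if ratio > best_ratio:
                best, best_ratio, best_travel = c, ratio, travel
        if best is None:
            break
        rps.append(best)
        used += best_travel
        remaining.remove(best)
    return rps

if __name__ == "__main__":
    sources = [(10, 80), (90, 85), (50, 95), (15, 20)]
    rps = greedy_rp_selection(sources, candidates=sources, sink=(50, 0), travel_budget=400)
    print("selected RPs:", rps)
```

In the paper the ME's tour and the routing tree constrain which RPs are useful; the out-and-back detour here is only the simplest budgeted travel model that still exposes the ratio-based greedy choice.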

Journal ArticleDOI
TL;DR: This work designs a dependable distributed WSN framework for SHM (called DependSHM) and examines its ability to cope with sensor faults and constraints, and presents a distributed automated algorithm to detect such types of faults.
Abstract: As an alternative to current wire-based networks, wireless sensor networks (WSNs) are becoming an increasingly compelling platform for engineering structural health monitoring (SHM) due to relatively low cost, easy installation, and so forth. However, there is still an unaddressed challenge: the application-specific dependability in terms of sensor fault detection and tolerance. The dependability is also affected by a reduction in the quality of monitoring when mitigating WSN constraints (e.g., limited energy, narrow bandwidth). We address these by designing a dependable distributed WSN framework for SHM (called DependSHM) and then examining its ability to cope with sensor faults and constraints. We find evidence that faulty sensors can corrupt the results of a health event (e.g., damage) in a structural system without being detected. More specifically, we bring attention to an undiscovered yet interesting fact, i.e., the real measured signals introduced by one or more faulty sensors may cause an undamaged location to be identified as damaged (a false positive) or a damaged location to be identified as undamaged (a false negative). This can be caused by faults in sensor bonding, precision degradation, amplification gain, bias, drift, noise, and so forth. In DependSHM, we present a distributed automated algorithm to detect such types of faults, and we offer an online signal reconstruction algorithm to recover from the wrong diagnosis. Through comprehensive simulations and a WSN prototype system implementation, we evaluate the effectiveness of DependSHM.

192 citations
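DependSHM's fault detection and signal reconstruction are not specified in the abstract; a much simplified stand-in is to flag a sensor whose reading deviates strongly from its neighbors' median and to reconstruct the flagged reading from that median. Everything below (the threshold, the data layout, and the median heuristic itself) is an assumption for illustration, not the algorithm from the paper.

```python
import statistics

def detect_and_reconstruct(readings, neighbors, z_threshold=3.0):
    """Flag sensors whose readings deviate strongly from their neighbors'
    median (a crude stand-in for a fault detector) and reconstruct flagged
    readings from the neighborhood median.

    readings:  {sensor_id: measured value}
    neighbors: {sensor_id: [ids of spatially adjacent sensors]}
    """
    faulty, reconstructed = set(), dict(readings)
    for sid, value in readings.items():
        local = [readings[n] for n in neighbors.get(sid, []) if n in readings]
        if len(local) < 3:
            continue  # not enough context to judge this sensor
        med = statistics.median(local)
        spread = statistics.pstdev(local) or 1e-9
        if abs(value - med) / spread > z_threshold:
            faulty.add(sid)
            reconstructed[sid] = med  # replace the suspect reading
    return faulty, reconstructed

if __name__ == "__main__":
    readings = {"s1": 1.02, "s2": 0.98, "s3": 1.01, "s4": 7.5, "s5": 0.99}
    neighbors = {sid: [n for n in readings if n != sid] for sid in readings}
    faulty, fixed = detect_and_reconstruct(readings, neighbors)
    print("faulty:", faulty)
    print("reconstructed s4:", fixed["s4"])
```

The point made in the abstract is precisely that such naive consistency checks are not enough for SHM, since a faulty signal can mimic or mask damage; the paper's contribution is a detector and reconstruction method tailored to that setting.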


Cited by
Posted Content
TL;DR: This paper defines and explores proofs of retrievability (PORs); a POR scheme enables an archive or back-up service to produce a concise proof that a user can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety.
Abstract: In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or back-up service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety. A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes. In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work. We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval. The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound.

1,783 citations
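For intuition only, here is a drastically simplified spot-checking sketch of the retrievability-check idea: the verifier keeps HMAC tags for a few randomly chosen blocks and later challenges the archive to return those blocks. This is not the construction from the paper; the block size, number of challenges, and function names are assumptions.

```python
import hashlib
import hmac
import os
import random

BLOCK_SIZE = 4096

def split_blocks(data, size=BLOCK_SIZE):
    return [data[i:i + size] for i in range(0, len(data), size)]

def prepare_challenges(data, key, num_checks=10):
    """Verifier-side setup: remember HMAC tags for a few random block indices
    so retrievability can be spot-checked later without keeping the file."""
    blocks = split_blocks(data)
    indices = random.sample(range(len(blocks)), k=min(num_checks, len(blocks)))
    return {i: hmac.new(key, blocks[i], hashlib.sha256).digest() for i in indices}

def prove(stored_data, index):
    """Prover-side response: return the requested block from archived storage."""
    return split_blocks(stored_data)[index]

def verify(key, index, block, expected_tags):
    return hmac.compare_digest(
        hmac.new(key, block, hashlib.sha256).digest(), expected_tags[index])

if __name__ == "__main__":
    key = os.urandom(32)
    original = os.urandom(64 * BLOCK_SIZE)   # the file F handed to the archive
    tags = prepare_challenges(original, key)

    archived = bytearray(original)
    target = next(iter(tags))                # corrupt a block we will challenge
    archived[target * BLOCK_SIZE] ^= 0xFF

    results = {i: verify(key, i, prove(bytes(archived), i), tags) for i in tags}
    print("all challenges passed:", all(results.values()))
```

In practice the archive does not know which blocks carry tags, so a corruption is caught only probabilistically, with detection probability growing in the number of challenged blocks; the paper's constructions achieve stronger guarantees with communication and verifier storage essentially independent of the file length.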

Journal ArticleDOI
TL;DR: A detailed review of the security-related challenges and sources of threat in IoT applications is presented, and four technologies for increasing the level of security in IoT are discussed: blockchain, fog computing, edge computing, and machine learning.
Abstract: The Internet of Things (IoT) is the next era of communication. Using the IoT, physical objects can be empowered to create, receive, and exchange data in a seamless manner. Various IoT applications focus on automating different tasks and are trying to empower the inanimate physical objects to act without any human intervention. The existing and upcoming IoT applications are highly promising to increase the level of comfort, efficiency, and automation for the users. To be able to implement such a world in an ever-growing fashion requires high security, privacy, authentication, and recovery from attacks. In this regard, it is imperative to make the required changes in the architecture of IoT applications for achieving end-to-end secure IoT environments. In this paper, a detailed review of the security-related challenges and sources of threat in IoT applications is presented. After discussing the security issues, various emerging and existing technologies focused on achieving a high degree of trust in IoT applications are discussed. Four technologies to increase the level of security in IoT are discussed: blockchain, fog computing, edge computing, and machine learning.

800 citations