Institution

Nanjing University of Information Science and Technology

Education · Nanjing, China
About: Nanjing University of Information Science and Technology is an education organization based in Nanjing, China. It is known for research contributions in the topics of Precipitation and Aerosol. The organization has 14129 authors who have published 17985 publications, receiving 267578 citations. The organization is also known as: Nan Xin Da.


Papers
Journal Article · DOI
TL;DR: As discussed by the authors, fuzzy sets have been employed for big data processing because of their ability to represent and quantify aspects of uncertainty, enabling informative, intelligent, and relevant decision making in areas such as medicine and healthcare, business, management, and government.
Abstract: In the era of big data, we are faced with an immense volume and high velocity of data with complex structures. Data can be produced by online and offline transactions, social networks, sensors, and our daily life activities. Proper processing of big data can result in informative, intelligent, and relevant decision making in various areas, such as medicine and healthcare, business, management, and government. To handle big data more efficiently, new research paradigms have been engaged, but the ways of thinking about big data call for further long-term innovative pursuits. Fuzzy sets have been employed for big data processing due to their ability to represent and quantify aspects of uncertainty. Several innovative approaches within the framework of Granular Computing have been proposed. To summarize the current contributions and present an outlook on further developments, this overview addresses three aspects: (1) We review recent studies from two distinct views. The first viewpoint focuses on what types of fuzzy set techniques have been adopted and identifies clear trends in the usage of fuzzy sets in big data processing. The second viewpoint focuses on explaining the benefits of fuzzy sets in big data problems; we analyze when and why fuzzy sets work in these problems. (2) We present a critical review of the existing problems and discuss the current challenges of big data, which could be potentially and partially solved within the framework of fuzzy sets. (3) Based on some principles, we infer possible trends in the use of fuzzy sets in big data processing. We stress that more sophisticated augmentations of fuzzy sets and their integration with other tools could offer a novel, promising processing environment.

97 citations
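For readers unfamiliar with the basic tool this overview surveys, here is a minimal sketch, not taken from the paper, of how a fuzzy set represents graded uncertainty: a triangular membership function maps a crisp value to a degree of membership in [0, 1]. The function name, the "warm" set, and the temperature thresholds are illustrative assumptions.

```python
# Minimal sketch (not from the paper): a triangular fuzzy membership function,
# the kind of primitive fuzzy-set tool discussed in the overview above.
def triangular_membership(x, a, b, c):
    """Degree to which x belongs to the fuzzy set defined by points (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Example: how strongly a sensor reading of 27.5 belongs to a hypothetical fuzzy
# set "warm", defined over 20..35 degrees with full membership at 27.
print(triangular_membership(27.5, 20.0, 27.0, 35.0))  # ~0.94
```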

Journal Article · DOI
TL;DR: The analysis shows that the proposed scheme is efficient in terms of computation and communication costs, is suitable for massive user groups, and supports the flexible and rapid growth of residential scales in smart grids.
Abstract: Efficient power management in smart grids requires obtaining power consumption data from each resident. However, data concerning a user's electricity consumption might reveal sensitive information, such as living habits and lifestyles. To solve this problem, this paper proposes a privacy-preserving cube-data aggregation scheme for electricity consumption. In our scheme, a data item is described as a multi-dimensional ($l$-dimensional) data structure, and users live in multiple residential areas ($m$ areas, with at most $n$ users in each area). Based on Horner's Rule, for each user we construct a user-level polynomial that stores the dimensional values in a single data space by using the first Horner parameter. After embedding the second Horner parameter into the polynomial, the polynomial is hidden by using the Paillier cryptosystem. By aggregating data from the $m$ areas, we hide the area-level polynomial in the final output. Moreover, we propose a batch verification scheme for multi-dimensional data to reduce the authentication cost. Finally, our analysis shows that the proposed scheme is efficient in terms of computation and communication costs, is suitable for massive user groups, and supports the flexible and rapid growth of residential scales in smart grids.

97 citations
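The core packing idea in the abstract above can be sketched as follows. This is a hedged illustration under simplifying assumptions: it shows only how Horner's Rule folds an $l$-dimensional record into one integer whose per-dimension sums remain separable after aggregation; the Paillier encryption, the second Horner parameter, and the area-level aggregation of the actual scheme are omitted, and the base and sample readings are hypothetical.

```python
# Minimal sketch (assumptions, not the paper's exact construction): packing an
# l-dimensional consumption record into a single integer with Horner's Rule, so
# that one ciphertext (e.g., under additively homomorphic Paillier encryption)
# could carry all dimensions. The base must exceed any per-dimension aggregate
# so that sums do not carry across dimensions.

def horner_pack(values, base):
    """Fold dimension values into one integer: ((v1*base + v2)*base + v3)..."""
    packed = 0
    for v in values:
        packed = packed * base + v
    return packed

def horner_unpack(packed, base, dims):
    """Recover the dimension values from a packed integer."""
    values = []
    for _ in range(dims):
        packed, v = divmod(packed, base)
        values.append(v)
    return list(reversed(values))

# Example: three users each report a 3-dimensional reading. Summing the packed
# integers (which homomorphic aggregation would do under encryption) yields the
# per-dimension totals in one shot.
BASE = 10**6
readings = [[12, 30, 7], [20, 25, 9], [18, 40, 11]]
aggregate = sum(horner_pack(r, BASE) for r in readings)
print(horner_unpack(aggregate, BASE, 3))  # [50, 95, 27]
```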

Journal Article · DOI
TL;DR: A mechanism combining blockchain with regeneration coding is proposed to improve the security and reliability of data stored under edge computing; a global blockchain is built in the cloud service layer and local blockchains are built on the terminals of the Internet of Things.
Abstract: Edge computing is an important tool for smart computing, which brings convenience to data processing as well as security problems. In particular, the security of data storage under edge computing has become an obstacle to its widespread use. To solve this problem, a mechanism combining blockchain with regeneration coding is proposed to improve the security and reliability of data stored under edge computing. Our contributions are as follows. 1) According to the three-tier edge computing architecture and the requirements of secure data storage, we propose a hybrid storage architecture and model specifically adapted to edge computing. 2) Making full use of the data storage advantages of edge network devices and cloud storage servers, we build a global blockchain in the cloud service layer and local blockchains on the terminals of the Internet of Things. Moreover, regeneration coding is utilized to further improve the reliability of data stored in the blockchains. 3) Our scheme provides a mechanism for periodically validating hash values of the data to ensure the integrity of data stored in the global blockchain.

96 citations
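A minimal sketch of the periodic hash-validation idea mentioned in point 3 of the abstract above, assuming SHA-256 digests and an in-memory stand-in for both the on-chain record and the edge storage. The consensus protocol, the local/global blockchain split, and the regenerating-code repair are not modeled, and all identifiers and sample data are hypothetical.

```python
# Minimal sketch (illustration only): periodically recompute hashes of stored
# data blocks and compare them with the digests recorded on-chain to detect
# tampering or silent corruption.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Digests that, in the scheme above, would live in the blockchain.
on_chain_digests = {}

def store_block(block_id: str, data: bytes, storage: dict):
    """Persist a data block and record its digest 'on-chain'."""
    storage[block_id] = data
    on_chain_digests[block_id] = digest(data)

def periodic_integrity_check(storage: dict):
    """Return ids of blocks whose current hash no longer matches the recorded digest."""
    return [bid for bid, data in storage.items()
            if digest(data) != on_chain_digests.get(bid)]

# Example: one block is corrupted in edge storage; the check flags it for repair
# (which regenerating codes could then perform from surviving fragments).
edge_storage = {}
store_block("sensor-batch-001", b"temperature readings", edge_storage)
store_block("sensor-batch-002", b"humidity readings", edge_storage)
edge_storage["sensor-batch-002"] = b"tampered"
print(periodic_integrity_check(edge_storage))  # ['sensor-batch-002']
```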

Journal Article · DOI
TL;DR: In this article, the authors observationally examined the interaction between Typhoon Morakot and low-frequency monsoon flows, including the southwesterly winds associated with intraseasonal oscillations and monsoon surges that supply moisture, and its influence on the typhoon's slow movement and asymmetric precipitation structure.
Abstract: Typhoon Morakot made landfall on Taiwan with a record rainfall of 3031.5 mm during 6–13 August 2009. While previous studies have emphasized the influence of southwesterly winds associated with intraseasonal oscillations and monsoon surges on moisture supply, this study observationally examines the interaction between Morakot and low-frequency monsoon flows and the resulting influence on the slow movement and asymmetric precipitation structure of the typhoon. Embedded in multi-time-scale monsoonal flows, Morakot generally moved westward prior to its landfall on Taiwan and underwent a coalescence process, first with a cyclonic gyre on the quasi-biweekly oscillation time scale and then with a cyclonic gyre on the Madden–Julian oscillation time scale. The coalescence enhanced the synoptic-scale southwesterly winds of Morakot, which slowed its westward movement and turned the track northward, leading to an unusually long residence time in the vicinity of Taiwan. The resulting slow movement and collocation...

96 citations

Journal Article · DOI
TL;DR: A dynamic resource provisioning method (DRPM) with fault tolerance for data-intensive meteorological workflows is proposed in this article, and the nondominated sorting genetic algorithm II (NSGA-II) is employed to minimize the makespan and improve the load balance.
Abstract: Cloud computing is a formidable paradigm for providing resources to handle services from the Industrial Internet of Things (IIoT), such as those of the meteorological industry. Generally, meteorological services, with complex interdependent logic, are modeled as workflows. When any of the computing nodes hosting the meteorological workflows fails, all sorts of consequences (e.g., data loss, makespan enlargement, performance degradation) can arise. Thus, recovering the failed tasks while optimizing the makespan and the load balance of the computing nodes remains a critical challenge. To address this challenge, a dynamic resource provisioning method (DRPM) with fault tolerance for data-intensive meteorological workflows is proposed in this article. Technically, the Virtual Layer 2 (VL2) network topology is exploited to build the meteorological cloud infrastructure. Then, the nondominated sorting genetic algorithm II (NSGA-II) is employed to minimize the makespan and improve the load balance. Finally, a comprehensive experimental analysis of DRPM is conducted.

96 citations
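To make the optimization target concrete, here is a minimal sketch, under assumptions not taken from the article, of the two objectives a multi-objective optimizer such as NSGA-II would trade off when assigning workflow tasks to computing nodes: makespan (simplified here to the heaviest node's total runtime, ignoring task dependencies and data transfer) and load imbalance (standard deviation of node loads). Function names and sample data are hypothetical.

```python
# Minimal sketch (hypothetical names and data, not the DRPM implementation):
# the two objectives NSGA-II would minimize for a task-to-node assignment.
def evaluate_assignment(task_runtimes, assignment, num_nodes):
    """assignment[i] is the node hosting task i; returns (makespan, imbalance)."""
    node_loads = [0.0] * num_nodes
    for runtime, node in zip(task_runtimes, assignment):
        node_loads[node] += runtime
    makespan = max(node_loads)
    mean_load = sum(node_loads) / num_nodes
    # Standard deviation of node loads as a simple load-balance objective.
    imbalance = (sum((load - mean_load) ** 2 for load in node_loads) / num_nodes) ** 0.5
    return makespan, imbalance

# Example: five workflow tasks assigned to three nodes. NSGA-II would evolve a
# population of such assignments and keep the non-dominated ones.
print(evaluate_assignment([4.0, 2.0, 6.0, 3.0, 5.0], [0, 1, 2, 0, 1], 3))  # (7.0, ~0.47)
```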


Authors

Showing all 14448 results

Name | H-index | Papers | Citations
Ashok Kumar | 151 | 5654 | 164086
Lei Zhang | 135 | 2240 | 99365
Bin Wang | 126 | 2226 | 74364
Shuicheng Yan | 123 | 810 | 66192
Zeshui Xu | 113 | 752 | 48543
Xiaoming Li | 113 | 1932 | 72445
Qiang Yang | 112 | 1117 | 71540
Yan Zhang | 107 | 2410 | 57758
Fei Wang | 107 | 1824 | 53587
Yongfa Zhu | 105 | 355 | 33765
James C. McWilliams | 104 | 535 | 47577
Zhi-Hua Zhou | 102 | 626 | 52850
Tao Li | 102 | 2483 | 60947
Lei Liu | 98 | 2041 | 51163
Jian Feng Ma | 97 | 305 | 32310
Network Information
Related Institutions (5)
Chinese Academy of Sciences: 634.8K papers, 14.8M citations (90% related)
University of Science and Technology of China: 101K papers, 2.4M citations (88% related)
City University of Hong Kong: 60.1K papers, 1.7M citations (88% related)
Harbin Institute of Technology: 109.2K papers, 1.6M citations (88% related)
Nanjing University: 105.5K papers, 2.2M citations (87% related)

Performance
Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 173
2022 | 552
2021 | 3,001
2020 | 2,492
2019 | 2,221
2018 | 1,822