Author

Yuling Chen

Bio: Yuling Chen is an academic researcher from Guizhou University. The author has contributed to research in the topics of computer science and computer security, has an h-index of 5, and has co-authored 23 publications receiving 499 citations. Previous affiliations of Yuling Chen include Guilin University of Electronic Technology.

Papers published on a yearly basis

Papers
Journal ArticleDOI
TL;DR: This survey report describes key literature surveys on machine learning (ML) and deep learning (DL) methods for network analysis of intrusion detection and provides a brief tutorial description of each ML/DL method.
Abstract: With the development of the Internet, cyber-attacks are changing rapidly and the cyber security situation is not optimistic. This survey report describes key literature surveys on machine learning (ML) and deep learning (DL) methods for network analysis of intrusion detection and provides a brief tutorial description of each ML/DL method. Papers representing each method were indexed, read, and summarized based on their temporal or thermal correlations. Because data are so important in ML/DL methods, we describe some of the network datasets commonly used in ML/DL, discuss the challenges of using ML/DL for cybersecurity, and provide suggestions for research directions.

676 citations

Journal ArticleDOI
TL;DR: This paper presents the definition of post-quantum blockchain (PQB) and proposes a secure cryptocurrency scheme based on PQB that can resist quantum computing attacks and whose signature satisfies correctness and one-more unforgeability under the lattice SIS assumption.

Abstract: Nowadays, blockchain has become one of the most cutting-edge technologies and has attracted wide attention and research. However, quantum computing attacks seriously threaten the security of blockchain, and related research remains scarce. To address this issue, in this paper we present the definition of post-quantum blockchain (PQB) and propose a secure cryptocurrency scheme based on PQB that can resist quantum computing attacks. First, we propose a signature scheme based on a lattice problem. We use a lattice basis delegation algorithm to generate secret keys by selecting a random value, and sign messages with a preimage sampling algorithm. In addition, we design a first-signature and a last-signature in our scheme, together defined as a double-signature, which is used to reduce the correlation between the message and the signature. Second, by combining the proposed signature scheme with blockchain, we construct the PQB and propose this cryptocurrency scheme. Its security can be reduced to the lattice short integer solution (SIS) problem. Finally, our analysis shows that the proposed cryptocurrency scheme can resist quantum computing attacks and that its signature satisfies correctness and one-more unforgeability under the lattice SIS assumption. Furthermore, compared with previous signature schemes, the signature and secret keys are relatively shorter, which decreases the computational complexity. These properties make our cryptocurrency scheme more secure and efficient.
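For context, the short integer solution (SIS) problem that the scheme's security reduces to has the following standard form (textbook definition, not quoted from the paper; the parameters n, m, q and the bound beta are the usual ones):

\[
\mathrm{SIS}_{n,m,q,\beta}:\quad \text{given a uniformly random } A \in \mathbb{Z}_q^{\,n\times m},\ \text{find } v \in \mathbb{Z}^m\setminus\{0\}\ \text{such that}\ Av \equiv 0 \pmod{q}\ \text{and}\ \|v\| \le \beta.
\]

In such reductions, a signature forgery is shown to yield a short nonzero vector v, so forging signatures is at least as hard as solving SIS.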

116 citations

Journal ArticleDOI
TL;DR: A new lattice-based signature scheme is proposed that can be used to secure the blockchain network over existing classical channels; it is proved secure in the random oracle model and is more efficient than similar schemes in the literature.

Abstract: Blockchain technology has gained significant prominence in recent years due to its public, distributed, and decentralized characteristics, and it has been widely applied wherever distributed trustless consensus is required. However, most of the cryptographic protocols used in current blockchain networks are susceptible to quantum attacks given the rapid development of sufficiently large quantum computers. In this paper, we first give an overview of the vulnerabilities of modern blockchain networks to a quantum adversary and some potential post-quantum mitigation methods. Then, a new lattice-based signature scheme is proposed, which can be used to secure the blockchain network over existing classical channels. The public and private keys are generated from the root keys by the Bonsai Trees technique with the RandBasis algorithm, which not only ensures randomness but also yields lightweight nondeterministic wallets. The proposed scheme is proved secure in the random oracle model, and it is more efficient than similar schemes in the literature. In addition, we give a detailed description of the post-quantum blockchain transaction. This work can help enrich research on the future post-quantum blockchain (PQB).
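As a rough illustration of the nondeterministic-wallet idea (fresh, unlinkable per-use keys derived from root keys), here is a minimal classical sketch using HMAC-based derivation; this is an analogy only, with hypothetical names, and is not the paper's lattice-based Bonsai Trees/RandBasis construction:

import hmac, hashlib, os

def derive_child_key(root_key: bytes, index: int, randomizer: bytes) -> bytes:
    """Derive a child key from a root key, an index, and fresh randomness.
    The randomness plays the role RandBasis plays in the paper's scheme:
    re-deriving the same index yields an unlinkable key."""
    msg = index.to_bytes(4, "big") + randomizer
    return hmac.new(root_key, msg, hashlib.sha256).digest()

root = os.urandom(32)                            # wallet root secret
k1 = derive_child_key(root, 0, os.urandom(16))
k2 = derive_child_key(root, 0, os.urandom(16))
assert k1 != k2                                  # same index, unlinkable keys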

62 citations

Journal ArticleDOI
TL;DR: A data integrity verification scheme based on a short signature algorithm (the ZSS signature) is proposed; it supports privacy protection and public auditing by introducing a third-party auditor (TPA) and effectively reduces computational overhead by lowering the hash function overhead in the signing process.

Abstract: The Internet of Things (IoT) is also known as the Internet of Everything. As an important part of the new generation of intelligent information technology, the IoT has attracted the attention of both researchers and engineers all over the world. Considering the limited capacity of smart products, the IoT mainly uses cloud computing to expand computing and storage resources. The massive data collected by sensors are stored in cloud storage servers, so cloud vulnerabilities directly threaten the security and reliability of the IoT. To ensure data integrity and availability in cloud and IoT storage systems, users need to verify the integrity of remote data. However, existing remote data integrity verification schemes are mostly based on the RSA and BLS signature mechanisms: RSA-based schemes carry too much computational overhead, while BLS-signature-based schemes require a special hash function and have low batch-signature efficiency in big data environments. Therefore, to address the computational overhead and signature efficiency issues of these two signature mechanisms, we propose a data integrity verification scheme based on a short signature algorithm (the ZSS signature) that supports privacy protection and public auditing by introducing a third-party auditor (TPA). The computational overhead is effectively reduced by lowering the hash function overhead in the signing process. Under the computational Diffie-Hellman (CDH) assumption, the scheme resists adaptive chosen-message attacks. Our analysis shows that the scheme achieves higher efficiency and security.
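For reference, the ZSS short signature the scheme builds on has the following textbook form (bilinear pairing e: G1 x G1 -> G2, generator P of G1, ordinary hash H into Z_q; this is the standard notation and may differ from the paper's):

\[
\begin{aligned}
\textbf{KeyGen:}&\quad x \xleftarrow{\$} \mathbb{Z}_q^{*},\qquad P_{\mathrm{pub}} = xP,\\
\textbf{Sign}(m):&\quad S = \frac{1}{H(m)+x}\,P,\\
\textbf{Verify}(m,S):&\quad e\bigl(H(m)P + P_{\mathrm{pub}},\,S\bigr) \stackrel{?}{=} e(P,P).
\end{aligned}
\]

Because H is an ordinary hash into Z_q rather than the map-to-point hash that BLS requires, signing avoids the special hash overhead, which is the efficiency point the abstract makes.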

56 citations

Journal ArticleDOI
TL;DR: In this article, PSSPR routing is proposed as a viable approach to addressing SLP issues; it uses the coordinates of the center node V to divide sector domains, which play an important role in generating new phantom nodes.

Abstract: Source location privacy (SLP) protection is an emerging research topic in wireless sensor networks (WSNs). Because the source location reveals valuable information about the target being monitored and tracked, achieving a high degree of source location privacy is of great practical significance. Although many phantom-node-based studies have improved source location privacy protection to some extent, it remains urgent to complicate the attack path between nodes, mitigate the concentrated distribution of phantom nodes near the source node, and reduce the network communication overhead. In this paper, PSSPR routing is proposed as a viable approach to addressing SLP issues. We use the coordinates of the center node V to divide sector domains, which play an important role in generating new phantom nodes. The phantom nodes follow specified routing policies to ensure that they can occupy diverse locations. In addition, directed random routing ensures that data packets avoid the visible range as they move hop by hop toward the sink node. Thus, the source location is protected. Theoretical analysis and simulation experiments show that this protocol achieves higher source node location security with less communication overhead.
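A minimal sketch of the sector-division step as we read it from the abstract: partition the plane around the center node V into equal angular sectors and pick a phantom-node location in a randomly chosen sector, so successive phantoms are spread out rather than clustered near the source. All names and parameters here are our own illustration, not the paper's protocol code:

import math, random

def sector_of(point, center, num_sectors=8):
    """Index of the angular sector around `center` that `point` falls into."""
    angle = math.atan2(point[1] - center[1], point[0] - center[0]) % (2 * math.pi)
    return int(angle / (2 * math.pi / num_sectors))

def pick_phantom(center, radius, num_sectors=8):
    """Pick a phantom location in a uniformly chosen sector around `center`."""
    s = random.randrange(num_sectors)
    theta = (s + random.random()) * (2 * math.pi / num_sectors)
    r = radius * random.uniform(0.5, 1.0)       # keep phantoms away from the source
    return (center[0] + r * math.cos(theta),
            center[1] + r * math.sin(theta))

phantom = pick_phantom(center=(0.0, 0.0), radius=100.0)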

43 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
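The mail-filtering example in the last category is easy to make concrete. A minimal sketch with scikit-learn on toy data (illustrative only; any text classifier would do):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy training data: messages the user kept (0) or rejected (1).
messages = ["meeting at noon", "cheap pills online",
            "project report attached", "win money now"]
labels = [0, 1, 0, 1]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)

# The learned model stands in for hand-written filtering rules.
clf = MultinomialNB().fit(X, labels)
print(clf.predict(vectorizer.transform(["win cheap money"])))  # -> [1]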

13,246 citations

Book
01 Jan 2001
TL;DR: This book discusses decision-theoretic foundations (game theory, rationality, and intelligence) and the decision-analytic approach to games, aiming to clarify the role of rationality in decision-making.
Abstract: Preface 1. Decision-Theoretic Foundations 1.1 Game Theory, Rationality, and Intelligence 1.2 Basic Concepts of Decision Theory 1.3 Axioms 1.4 The Expected-Utility Maximization Theorem 1.5 Equivalent Representations 1.6 Bayesian Conditional-Probability Systems 1.7 Limitations of the Bayesian Model 1.8 Domination 1.9 Proofs of the Domination Theorems Exercises 2. Basic Models 2.1 Games in Extensive Form 2.2 Strategic Form and the Normal Representation 2.3 Equivalence of Strategic-Form Games 2.4 Reduced Normal Representations 2.5 Elimination of Dominated Strategies 2.6 Multiagent Representations 2.7 Common Knowledge 2.8 Bayesian Games 2.9 Modeling Games with Incomplete Information Exercises 3. Equilibria of Strategic-Form Games 3.1 Domination and Rationalizability 3.2 Nash Equilibrium 3.3 Computing Nash Equilibria 3.4 Significance of Nash Equilibria 3.5 The Focal-Point Effect 3.6 The Decision-Analytic Approach to Games 3.7 Evolution, Resistance, and Risk Dominance 3.8 Two-Person Zero-Sum Games 3.9 Bayesian Equilibria 3.10 Purification of Randomized Strategies in Equilibria 3.11 Auctions 3.12 Proof of Existence of Equilibrium 3.13 Infinite Strategy Sets Exercises 4. Sequential Equilibria of Extensive-Form Games 4.1 Mixed Strategies and Behavioral Strategies 4.2 Equilibria in Behavioral Strategies 4.3 Sequential Rationality at Information States with Positive Probability 4.4 Consistent Beliefs and Sequential Rationality at All Information States 4.5 Computing Sequential Equilibria 4.6 Subgame-Perfect Equilibria 4.7 Games with Perfect Information 4.8 Adding Chance Events with Small Probability 4.9 Forward Induction 4.10 Voting and Binary Agendas 4.11 Technical Proofs Exercises 5. Refinements of Equilibrium in Strategic Form 5.1 Introduction 5.2 Perfect Equilibria 5.3 Existence of Perfect and Sequential Equilibria 5.4 Proper Equilibria 5.5 Persistent Equilibria 5.6 Stable Sets of Equilibria 5.7 Generic Properties 5.8 Conclusions Exercises 6. Games with Communication 6.1 Contracts and Correlated Strategies 6.2 Correlated Equilibria 6.3 Bayesian Games with Communication 6.4 Bayesian Collective-Choice Problems and Bayesian Bargaining Problems 6.5 Trading Problems with Linear Utility 6.6 General Participation Constraints for Bayesian Games with Contracts 6.7 Sender-Receiver Games 6.8 Acceptable and Predominant Correlated Equilibria 6.9 Communication in Extensive-Form and Multistage Games Exercises Bibliographic Note 7. Repeated Games 7.1 The Repeated Prisoners' Dilemma 7.2 A General Model of Repeated Games 7.3 Stationary Equilibria of Repeated Games with Complete State Information and Discounting 7.4 Repeated Games with Standard Information: Examples 7.5 General Feasibility Theorems for Standard Repeated Games 7.6 Finitely Repeated Games and the Role of Initial Doubt 7.7 Imperfect Observability of Moves 7.8 Repeated Games in Large Decentralized Groups 7.9 Repeated Games with Incomplete Information 7.10 Continuous Time 7.11 Evolutionary Simulation of Repeated Games Exercises 8. Bargaining and Cooperation in Two-Person Games 8.1 Noncooperative Foundations of Cooperative Game Theory 8.2 Two-Person Bargaining Problems and the Nash Bargaining Solution 8.3 Interpersonal Comparisons of Weighted Utility 8.4 Transferable Utility 8.5 Rational Threats 8.6 Other Bargaining Solutions 8.7 An Alternating-Offer Bargaining Game 8.8 An Alternating-Offer Game with Incomplete Information 8.9 A Discrete Alternating-Offer Game 8.10 Renegotiation Exercises 9. Coalitions in Cooperative Games 9.1 Introduction to Coalitional Analysis 9.2 Characteristic Functions with Transferable Utility 9.3 The Core 9.4 The Shapley Value 9.5 Values with Cooperation Structures 9.6 Other Solution Concepts 9.7 Coalitional Games with Nontransferable Utility 9.8 Cores without Transferable Utility 9.9 Values without Transferable Utility Exercises Bibliographic Note 10. Cooperation under Uncertainty 10.1 Introduction 10.2 Concepts of Efficiency 10.3 An Example 10.4 Ex Post Inefficiency and Subsequent Offers 10.5 Computing Incentive-Efficient Mechanisms 10.6 Inscrutability and Durability 10.7 Mechanism Selection by an Informed Principal 10.8 Neutral Bargaining Solutions 10.9 Dynamic Matching Processes with Incomplete Information Exercises Bibliography Index

3,569 citations

Journal ArticleDOI
01 Nov 2018-Heliyon
TL;DR: The study found that neural-network models such as feedforward and feedback propagation artificial neural networks perform better in their application to human problems, and it proposes feedforward and feedback propagation ANN models as a research focus based on data analysis factors such as accuracy, processing speed, latency, fault tolerance, volume, scalability, convergence, and performance.

1,471 citations

Journal ArticleDOI
TL;DR: This paper presents a comprehensive literature review on applications of deep reinforcement learning (DRL) in communications and networking, and presents applications of DRL for traffic routing, resource sharing, and data collection.
Abstract: This paper presents a comprehensive literature review on applications of deep reinforcement learning (DRL) in communications and networking. Modern networks, e.g., Internet of Things (IoT) and unmanned aerial vehicle (UAV) networks, are becoming more decentralized and autonomous. In such networks, network entities need to make decisions locally to maximize network performance under uncertainty about the network environment. Reinforcement learning has been used efficiently to enable network entities to obtain the optimal policy, e.g., decisions or actions, given their states, when the state and action spaces are small. However, in complex and large-scale networks, the state and action spaces are usually large, and reinforcement learning may not be able to find the optimal policy in reasonable time. Therefore, DRL, a combination of reinforcement learning and deep learning, has been developed to overcome these shortcomings. In this survey, we first give a tutorial on DRL, from fundamental concepts to advanced models. Then, we review DRL approaches proposed to address emerging issues in communications and networking, including dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation, all of which are important to next-generation networks such as 5G and beyond. Furthermore, we present applications of DRL to traffic routing, resource sharing, and data collection. Finally, we highlight important challenges, open issues, and future research directions for applying DRL.
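To ground the distinction the survey draws: with small state and action spaces, the policy can be kept in a table and updated directly, and DRL replaces that table with a neural network when the spaces grow large. A minimal tabular Q-learning update (generic illustration, not code from the survey):

import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))   # the table DRL replaces with a network
alpha, gamma = 0.1, 0.9               # learning rate, discount factor

def q_update(s, a, reward, s_next):
    """One Q-learning step: move Q[s, a] toward the bootstrapped target."""
    target = reward + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

q_update(s=0, a=1, reward=1.0, s_next=2)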

1,153 citations

Journal ArticleDOI
TL;DR: A highly scalable hybrid DNN framework called scale-hybrid-IDS-AlertNet is proposed, which can be used in real time to effectively monitor network traffic and host-level events and proactively alert on possible cyberattacks.

Abstract: Machine learning techniques are widely used to develop intrusion detection systems (IDS) for detecting and classifying cyberattacks at the network level and the host level in a timely and automatic manner. However, many challenges arise, since malicious attacks are continually changing and occur in very large volumes, requiring a scalable solution. Different malware datasets are publicly available for further research by the cyber security community, but no existing study has shown a detailed analysis of the performance of various machine learning algorithms across these publicly available datasets. Owing to the dynamic nature of malware, with continuously changing attack methods, publicly available malware datasets must be updated systematically and benchmarked. In this paper, a deep neural network (DNN), a type of deep learning model, is explored to develop a flexible and effective IDS to detect and classify unforeseen and unpredictable cyberattacks. The continuous change in network behavior and the rapid evolution of attacks make it necessary to evaluate various datasets generated over the years through static and dynamic approaches. This type of study helps to identify the best algorithm for effectively detecting future cyberattacks. A comprehensive evaluation of experiments with DNNs and other classical machine learning classifiers is shown on various publicly available benchmark malware datasets. The optimal network parameters and network topologies for the DNNs are chosen through hyperparameter selection on the KDDCup 99 dataset. All DNN experiments are run for up to 1,000 epochs with the learning rate varying in the range [0.01, 0.5]. The DNN model that performed well on KDDCup 99 is applied to other datasets, such as NSL-KDD, UNSW-NB15, Kyoto, WSN-DS, and CICIDS 2017, to conduct the benchmark. Our DNN model learns abstract, high-dimensional feature representations of the IDS data by passing it through many hidden layers. Rigorous experimental testing confirms that DNNs perform well in comparison with the classical machine learning classifiers. Finally, we propose a highly scalable hybrid DNN framework called scale-hybrid-IDS-AlertNet, which can be used in real time to effectively monitor network traffic and host-level events and proactively alert on possible cyberattacks.
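A minimal sketch of the kind of learning-rate sweep the abstract describes, with synthetic data standing in for KDDCup 99 and a generic MLP standing in for the paper's DNN architecture (all names and values illustrative):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for labeled traffic features (attack vs. benign).
X, y = make_classification(n_samples=2000, n_features=41, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

best = None
for lr in [0.01, 0.05, 0.1, 0.5]:     # learning rates within the paper's range
    clf = MLPClassifier(hidden_layer_sizes=(64, 32),
                        learning_rate_init=lr,
                        max_iter=200, random_state=0)
    clf.fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    if best is None or acc > best[1]:
        best = (lr, acc)
print("best learning rate, accuracy:", best)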

847 citations