
Showing papers in "Cmc-computers Materials & Continua in 2018"


Journal ArticleDOI
Jin Wang, Chunwei Ju, Yu Gao, Arun Kumar Sangaiah, Gwang-jun Kim
TL;DR: A novel coverage control algorithm based on Particle Swarm Optimization (PSO) is presented that can effectively improve coverage rate and reduce energy consumption in WSNs.
Abstract: Wireless Sensor Networks (WSNs) are large-scale, high-density networks that typically have overlapping coverage areas. In addition, a random deployment of sensor nodes cannot fully guarantee coverage of the sensing area, which leads to coverage holes in WSNs. Thus, coverage control plays an important role in WSNs. To alleviate unnecessary energy wastage and improve network performance, we consider both energy efficiency and coverage rate for WSNs. In this paper, we present a novel coverage control algorithm based on Particle Swarm Optimization (PSO). Firstly, the sensor nodes are randomly deployed in a target area and remain static after deployment. Then, the whole network is partitioned into grids, and we calculate each grid's coverage rate and energy consumption. Finally, each sensor node's sensing radius is adjusted according to the coverage rate and energy consumption of each grid. Simulation results show that our algorithm can effectively improve coverage rate and reduce energy consumption.
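The per-grid coverage rate that drives the radius adjustment can be sketched under the common binary disk sensing model. This is a minimal illustration, not the paper's implementation; the sampling step, the `(x, y, radius)` tuple layout for sensors, and the function name are assumptions.

```python
import math

def grid_coverage_rate(sensors, grid_origin, grid_size, step=1.0):
    """Fraction of sample points in one grid cell covered by at least
    one sensor, under the binary disk sensing model.

    sensors: list of (x, y, sensing_radius) tuples (assumed layout).
    grid_origin: (x0, y0) lower-left corner of the square cell.
    grid_size: side length of the cell.
    step: sampling resolution inside the cell.
    """
    x0, y0 = grid_origin
    covered = total = 0
    y = y0
    while y < y0 + grid_size:
        x = x0
        while x < x0 + grid_size:
            total += 1
            # A point is covered if it lies within some sensor's disk.
            if any(math.hypot(x - sx, y - sy) <= r for sx, sy, r in sensors):
                covered += 1
            x += step
        y += step
    return covered / total

# A sensor at the cell centre whose radius reaches every sample point:
full = grid_coverage_rate([(5.0, 5.0, 8.0)], (0.0, 0.0), 10.0)
```

In the algorithm described above, this rate would be computed per grid cell and fed back to shrink or grow each node's sensing radius.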

244 citations


Journal ArticleDOI
TL;DR: Generative Adversarial Networks (GANs) are extended to semi-supervised learning, showing that they can be used to create a more data-efficient classifier.
Abstract: Deep Learning (DL) is such a powerful tool that we have seen tremendous success in areas such as Computer Vision, Speech Recognition, and Natural Language Processing. Since Automated Modulation Classification (AMC) is an important part of Cognitive Radio Networks, we explore DL's potential for solving the signal modulation recognition problem. It cannot be overlooked that DL models are complex, making them prone to over-fitting. A DL model requires a large amount of training data to combat over-fitting, but manually adding high-quality labels to training data is not always cheap or feasible, especially in real-time systems, which may encounter unprecedented data. Semi-supervised learning is a way to exploit unlabeled data effectively to reduce over-fitting in DL. In this paper, we extend Generative Adversarial Networks (GANs) to semi-supervised learning and show that they can be used to create a more data-efficient classifier.
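The semi-supervised GAN classifier idea can be illustrated with the standard K+1-class trick: fix the fake-class logit at 0, so the discriminator's K real-class logits double as the classifier. This sketch follows the widely used formulation from the semi-supervised GAN literature (Salimans et al.) and is not taken from this paper itself.

```python
import math

def d_real_prob(logits):
    """Probability that an input is real under the K+1-class trick:
    with the fake-class logit fixed at 0,
    D(x) = Z(x) / (Z(x) + 1), where Z(x) = sum(exp(l_k)) over the
    K real-class logits.
    """
    z = sum(math.exp(l) for l in logits)
    return z / (z + 1.0)

def class_probs(logits):
    """Posterior over the K real classes, conditioned on the input
    being real (a plain softmax over the real-class logits)."""
    z = sum(math.exp(l) for l in logits)
    return [math.exp(l) / z for l in logits]
```

Labeled examples train `class_probs` with cross-entropy, while unlabeled and generated examples train `d_real_prob` toward 1 and 0 respectively, which is what lets the unlabeled data regularize the classifier.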

146 citations


Journal ArticleDOI
TL;DR: Experimental results show that this approach enhances the security and provides robust embedding of secret data in image, video, voice or text media.
Abstract: The aim of information hiding is to embed secret information in a normal cover medium such as an image, video, voice or text, and then transmit the secret data through the transmission of the public information. The secret message should not be damaged when processing is applied to the cover medium. To ensure the invisibility of confidential information, complex texture objects are suitable for embedding information. In this paper, an approach that matches multiple steganographic algorithms to complex texture objects is presented for hiding secret data. Firstly, complex texture regions are selected based on object detection. Secondly, several different steganographic methods are used to hide secret data in the selected area blocks. Experimental results show that this approach enhances security and provides robust embedding.

130 citations


Journal ArticleDOI
TL;DR: A two-layer fully-connected neural network is used as the generator and Piecewise Convolutional Neural Networks (PCNNs) as the discriminator; experimental results show that the proposed GAN-based method is effective and performs better than state-of-the-art methods.
Abstract: Recently, many researchers have concentrated on using neural networks to learn features for Distant Supervised Relation Extraction (DSRE). These approaches generally use a softmax classifier with cross-entropy loss, which inevitably brings the noise of the artificial class NA into the classification process. To address this shortcoming, a classifier with ranking loss is applied to DSRE. Uniformly randomly selecting a relation or heuristically selecting the highest-scoring incorrect relation are two common methods for generating a negative class in the ranking loss function. However, the majority of the generated negative classes can be easily discriminated from the positive class and contribute little to training. Inspired by Generative Adversarial Networks (GANs), we use a neural network as the negative-class generator to assist the training of our desired model, which acts as the discriminator in GANs. Through the alternating optimization of generator and discriminator, the generator learns to produce increasingly hard-to-discriminate negative classes, and the discriminator has to improve as well. This framework is independent of the concrete form of generator and discriminator. In this paper, we use a two-layer fully-connected neural network as the generator and Piecewise Convolutional Neural Networks (PCNNs) as the discriminator. Experimental results show that our proposed GAN-based method is effective and performs better than state-of-the-art methods.

109 citations


Journal ArticleDOI
TL;DR: The proposed method can effectively reduce compute resource consumption and identify a DDoS attack at its initial stage with a higher detection rate and a lower false alarm rate; the attack is identified based on the abnormal probability of the forecast PDRA sequence.
Abstract: Distributed denial-of-service (DDoS) is a rapidly growing problem with the fast development of the Internet. There are a multitude of DDoS detection approaches; however, three major problems in DDoS attack detection arise in the big data environment: first, shortening the response time of the DDoS attack detector; second, reducing the required compute resources; and last, achieving a high detection rate with a low false alarm rate. In this paper, we propose an abnormal network flow feature sequence prediction approach that is suitable for use as a DDoS attack detector in the big data environment and solves the aforementioned problems. We define a network flow abnormal index, PDRA, composed of the percentage of old IP addresses, the increment of new IP addresses, the ratio of new IP addresses to old IP addresses, and the average accessing rate of each new IP address. We design an IP address database using a sequential storage model with constant time complexity. The autoregressive integrated moving average (ARIMA) trending prediction module is started if and only if the number of continuous PDRA sequence values that all exceed a PDRA abnormal threshold (PAT) reaches a certain preset threshold. We then calculate the probability, namely the percentage of forecast PDRA sequence values which exceed the PAT. Finally, we identify the DDoS attack based on the abnormal probability of the forecast PDRA sequence. Both theory and experiment show that the proposed method can effectively reduce compute resource consumption and identify a DDoS attack at its initial stage with a higher detection rate and a lower false alarm rate.
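A toy version of the PDRA ingredients and the ARIMA trigger rule can be sketched as follows. The exact combination of the four ingredients, and all names and weights here, are assumptions; only the trigger rule (a run of consecutive values above PAT) follows the abstract directly.

```python
def pdra(old_ips, new_ips, window_seconds, new_ip_accesses):
    """Toy composite of the abnormal-index ingredients named in the
    abstract (the paper's exact combination is not given here):
    percentage of old IPs, new/old IP ratio, and average access rate
    per new IP. The additive combination is an illustrative assumption.
    """
    total = old_ips + new_ips
    pct_old = old_ips / total if total else 0.0
    ratio_new_old = new_ips / old_ips if old_ips else float('inf')
    avg_rate = (new_ip_accesses / new_ips / window_seconds) if new_ips else 0.0
    # More new IPs and higher per-IP access rates push the index up.
    return (1.0 - pct_old) + ratio_new_old + avg_rate

def should_start_arima(pdra_seq, pat, run_length):
    """Start ARIMA forecasting iff `run_length` consecutive PDRA
    values all exceed the abnormal threshold PAT."""
    run = 0
    for v in pdra_seq:
        run = run + 1 if v > pat else 0
        if run >= run_length:
            return True
    return False
```

The trigger keeps the expensive ARIMA module idle during normal traffic, which is how the approach saves compute resources.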

98 citations



Journal ArticleDOI
TL;DR: Experimental results illustrate that the proposed novel reversible natural language watermarking method can extract the watermark successfully and recover the original text losslessly and achieves a high embedding capacity.
Abstract: For protecting the copyright of a text and recovering its original content harmlessly, this paper proposes a novel reversible natural language watermarking method that combines arithmetic coding and synonym substitution operations. By analyzing the relative frequencies of synonymous words, the synonyms employed for carrying the payload are quantized into an unbalanced binary sequence, which is redundant. The quantized binary sequence is compressed losslessly by arithmetic coding to make room for accommodating additional data. Then, the compressed data appended with the watermark are embedded into the cover text via synonym substitutions in an invertible manner. On the receiver side, the watermark and compressed data can be extracted by decoding the values of the synonyms in the watermarked text, so that the original text can be perfectly recovered by employing arithmetic coding to decompress the extracted compressed data and substituting the replaced synonyms with their original synonyms. Experimental results illustrate that the proposed method can extract the watermark successfully and recover the original text losslessly. Additionally, it achieves a high embedding capacity.
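The quantization step, which turns synonym choices into an unbalanced (and therefore compressible) binary sequence, can be sketched as below. The synonym-set representation (`word -> members sorted by descending frequency`) is an assumption, and the arithmetic coder itself is omitted.

```python
def quantize_synonyms(words, synsets):
    """Quantize each synonym occurrence to one bit: 0 if the word is
    the most frequent member of its synonym set, 1 otherwise. Because
    writers usually pick the most frequent synonym, the resulting
    sequence is unbalanced and compresses well (the paper compresses
    it with arithmetic coding, not shown here).

    synsets: word -> list of members sorted by descending frequency
             (an assumed representation for this sketch).
    """
    bits = []
    for w in words:
        if w in synsets:
            bits.append(0 if w == synsets[w][0] else 1)
    return bits
```

The compressed version of this bit sequence plus the watermark then fits back into the same synonym positions, which is what makes the embedding reversible.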

80 citations


Journal Article
TL;DR: A novel coverless information hiding method based on MSIM is proposed, which utilizes the average value of a sub-image's pixels to represent the secret information, according to a mapping between pixel-value intervals and secret information.
Abstract: Traditional information hiding methods embed the secret information by modifying the carrier, which inevitably leaves traces of modification on the carrier, making it hard to resist detection by steganalysis algorithms. To address this problem, the concept of coverless information hiding was proposed. Coverless information hiding can effectively resist steganalysis algorithms, since it uses unmodified natural stego-carriers to represent and convey confidential information. However, state-of-the-art methods have a low hiding capacity, which makes them less appealing. Because the pixel values of different regions of the molecular structure images of materials (MSIM) are usually different, this paper proposes a novel coverless information hiding method based on MSIM, which utilizes the average value of a sub-image's pixels to represent the secret information, according to a mapping between pixel-value intervals and secret information. In addition, we employ a pseudo-random label sequence to determine the positions of the sub-images, improving the security of the method, and the histogram of the Bag-of-Words (BOW) model is used to determine the number of sub-images in the image that convey secret information. Moreover, to improve retrieval efficiency, we built a multi-level inverted index structure. Furthermore, the proposed method can also be applied to other natural images. Compared with the state of the art, experimental results and analysis demonstrate that our method performs better in anti-steganalysis, security and capacity.
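The core mapping from a sub-image's average pixel value to secret bits can be sketched with equal-width intervals; the interval layout and bit width below are illustrative assumptions, not the paper's exact mapping.

```python
def bits_for_block(pixels, n_bits=2):
    """Map the average pixel value of a sub-image block onto a bit
    string via equal-width intervals over [0, 256). With n_bits = 2
    the intervals are [0,64)->'00', [64,128)->'01', [128,192)->'10',
    [192,256)->'11'. The equal-width layout is an assumption made for
    illustration.
    """
    avg = sum(pixels) / len(pixels)
    n_intervals = 2 ** n_bits
    idx = min(int(avg * n_intervals // 256), n_intervals - 1)
    return format(idx, '0{}b'.format(n_bits))
```

To hide a message coverlessly, the sender retrieves natural images whose sub-image averages already fall in the right intervals; nothing in the carrier is ever modified.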

78 citations


Journal ArticleDOI
Jixian Zhang, Ning Xie, Zhang Xuejie, Yue Kun, Li Weidong, Kumar Deepesh
TL;DR: By learning a small-scale training set, the prediction model can guarantee that the social welfare, allocation accuracy, and resource utilization in the feasible solution are very close to those of the optimal allocation solution.
Abstract: Resource allocation in auctions is a challenging problem for cloud computing: the resource allocation problem is NP-hard and cannot be solved in polynomial time. Existing studies mainly use approximation algorithms such as PTAS or heuristic algorithms to determine a feasible solution, but these algorithms suffer from low computational efficiency or low allocation accuracy. In this paper, we use machine-learning classification to model and analyze the multi-dimensional cloud resource allocation problem and propose two resource allocation prediction algorithms based on linear and logistic regression. By learning a small-scale training set, the prediction model can guarantee that the social welfare, allocation accuracy, and resource utilization of the feasible solution are very close to those of the optimal allocation solution. The experimental results show that the proposed scheme performs well for resource allocation in cloud computing.
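A logistic-regression allocation predictor of the kind described can be sketched in a few lines. The feature layout (here a single normalized bid-price feature), the hyper-parameters, and the training data are illustrative assumptions, not the paper's setup.

```python
import math

def train_logistic(samples, labels, lr=0.5, epochs=2000):
    """Tiny logistic-regression trainer (gradient descent) for
    predicting whether a bid is allocated (label 1) or not (label 0).
    """
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - y                        # gradient of log-loss wrt z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """1 if the model predicts the bid is allocated, else 0."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0
```

Trained on solved small instances, such a model can propose a feasible allocation directly instead of re-running an expensive approximation algorithm per auction.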

55 citations


Journal ArticleDOI
TL;DR: A verifiable diversity ranking search scheme over encrypted outsourced data is proposed while preserving privacy in cloud computing, which also supports search results verification, and is effective for the diversification of documents and verification.
Abstract: Data outsourcing has become an important application of cloud computing. Driven by the growing security demands of data outsourcing applications, sensitive data have to be encrypted before outsourcing. Therefore, how to encrypt data in a way that the encrypted, remotely stored data can still be queried has become a challenging issue. Searchable encryption schemes allow users to search over encrypted data. However, most searchable encryption schemes do not consider search result diversification, resulting in information redundancy. In this paper, a verifiable diversity ranking search scheme over encrypted outsourced data is proposed that preserves privacy in cloud computing and also supports search result verification. The goal is to return ranked documents with diversification, instead of relevant documents that only deliver redundant information. Extensive experiments on a real-world dataset validate our analysis and show that our proposed solution is effective for the diversification of documents and for verification.

55 citations


Journal ArticleDOI
TL;DR: A method is proposed that adds dropout and Batch Normalization operations after each fully-connected layer to further accelerate model convergence and obtain a better classification effect.
Abstract: Road traffic sign recognition is an important task in intelligent transportation systems. Convolutional neural networks (CNNs) have achieved a breakthrough in computer vision tasks and great success in traffic sign classification. This paper presents a road traffic sign recognition algorithm based on a convolutional neural network. In natural scenes, traffic signs are disturbed by factors such as illumination, occlusion, missing parts and deformation, which decrease recognition accuracy; to address this, this paper proposes a model called Improved VGG (IVGG), inspired by the VGG model. The IVGG model includes 9 layers; compared with the original VGG model, max-pooling and dropout operations are added after multiple convolutional layers to capture the main features and save training time. The paper proposes adding dropout and Batch Normalization (BN) operations after each fully-connected layer to further accelerate model convergence and obtain a better classification effect. The German Traffic Sign Recognition Benchmark (GTSRB) dataset is used in the experiments. The IVGG model enhances the recognition rate and robustness of traffic sign recognition by using data augmentation and transfer learning, and the time spent is also greatly reduced.

Journal Article
TL;DR: Time optimization models of multiple knowledge transfers in the big data environment are presented by maximizing the total discounted expected profits (DEPs) of an enterprise.
Abstract: In the big data environment, enterprises must constantly assimilate big data knowledge and private knowledge through multiple knowledge transfers to maintain their competitive advantage. The timing of knowledge transfer is one of the most important aspects of improving knowledge transfer efficiency. Based on an analysis of the complex characteristics of knowledge transfer in the big data environment, multiple knowledge transfers can be divided into two categories: the simultaneous transfer of various types of knowledge, and multiple knowledge transfers at different time points. Taking into consideration influence factors such as knowledge type, knowledge structure, knowledge absorptive capacity, knowledge update rate, discount rate, market share, profit contributions of each type of knowledge, transfer costs, and product life cycle, time optimization models of multiple knowledge transfers in the big data environment are presented by maximizing the total discounted expected profits (DEPs) of an enterprise. Simulation experiments verify the validity of the models, which can help enterprises determine the optimal times for multiple knowledge transfers in the big data environment.

Journal ArticleDOI
TL;DR: This work presents a scheme named SecDisplay for trusted display service; it protects displayed sensitive data from being stolen or tampered with surreptitiously by a compromised OS, and its performance overhead is evaluated.
Abstract: While smart devices based on ARM processors bring us a lot of convenience, they have also become an attractive target for cyber-attacks. The threat is exacerbated as commodity OSes usually have a large code base and suffer from various software vulnerabilities. Nowadays, adversaries prefer to steal sensitive data by leaking the display output of security-sensitive applications. A promising solution is to exploit the hardware virtualization extensions provided by modern ARM processors to construct a secure display path between the applications and the display device. In this work, we present a scheme named SecDisplay for trusted display service; it protects displayed sensitive data from being stolen or tampered with surreptitiously by a compromised OS. The TCB of SecDisplay mainly consists of a tiny hypervisor and a super lightweight rendering painter, and has only ~1400 lines of code. We implemented a prototype of SecDisplay and evaluated its performance overhead. The results show that SecDisplay incurs an average performance drop of only 3.4%.

Journal ArticleDOI
TL;DR: This paper presents an effective approach to automatically identify PI and CG based on deep convolutional neural networks (DCNNs) that achieves real-time forensic tasks by deepening the network structure.
Abstract: Currently, some photorealistic computer graphics are very similar to photographic images. Photorealistic computer-generated graphics can be forged as photographic images, causing serious security problems. The aim of this work is to use a deep neural network to detect photographic images (PI) versus computer-generated graphics (CG). In existing approaches, image feature classification is computationally intensive and fails to achieve real-time analysis. This paper presents an effective approach to automatically identify PI and CG based on deep convolutional neural networks (DCNNs). Compared with some existing methods, the proposed method achieves real-time forensic tasks by deepening the network structure. Experimental results show that this approach can effectively identify PI and CG with an average detection accuracy of 98%.

Journal ArticleDOI
TL;DR: Experimental simulation results show that cooperation trust evaluation helps solve the trust problem in container-based cloud environments and can improve the success rate of subsequent cooperation.
Abstract: Container virtualization technology aims to provide program independence and resource sharing, and containers enable flexible cloud services. Compared with containers, traditional virtual machines have heavier resource and expense requirements; container technology has the advantages of smaller size, faster migration, lower resource overhead, and higher utilization. Within a container-based cloud environment, services can adopt multiple target nodes. This paper reports research results that improve the traditional trust model by taking cooperation effects into consideration. Cooperation trust means that in a container-based cloud environment, a service can be divided into multiple containers on different container nodes. When multiple target nodes work on one service at the same time, these nodes are in a cooperation state. When multiple target nodes cooperate to complete a service, they evaluate each other, and the calculated cooperation trust evaluation is used to update the degree of comprehensive trust. Experimental simulation results show that cooperation trust evaluation helps solve the trust problem in container-based cloud environments and can improve the success rate of subsequent cooperation.
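The update of comprehensive trust from mutual cooperation evaluations can be sketched as a weighted moving average; the weight and the plain averaging of peer scores are illustrative assumptions, not the paper's exact formula.

```python
def update_trust(current_trust, peer_scores, weight=0.3):
    """Fold the mutual evaluations from cooperating nodes into a
    node's comprehensive trust as a weighted moving average.

    current_trust: the node's comprehensive trust in [0, 1].
    peer_scores: evaluations in [0, 1] from nodes that cooperated
                 with it on the same service.
    weight: how much one round of cooperation shifts the trust
            (an assumed parameter).
    """
    if not peer_scores:
        return current_trust
    coop = sum(peer_scores) / len(peer_scores)
    return (1 - weight) * current_trust + weight * coop
```

Nodes whose peers consistently rate them poorly see their comprehensive trust decay, which is then used when selecting target nodes for the next cooperation.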

Journal ArticleDOI
TL;DR: Evaluation results demonstrate that the proposed feature selection method can improve accuracy and reduce the FNR compared to traditional feature selection methods, and that the multi-label classification framework has better accuracy and a lower FNR than the traditional expert system.
Abstract: By making efficient and timely medical diagnostic decisions, clinicians can positively impact the quality and cost of medical care. However, the high similarity of clinical manifestations between diseases and the limitations of clinicians' knowledge both complicate diagnostic decision making. Therefore, building a decision support system that can assist medical staff in diagnosing and treating diseases has lately received growing attention in the medical domain. In this paper, we employ a multi-label classification framework to classify Chinese electronic medical records, establishing the correspondence between medical records and disease categories, and compare this method with a traditional medical expert system to verify its performance. To select the best subset of patient features, we propose a feature selection method based on the composition and distribution of symptoms in electronic medical records and compare it with traditional feature selection methods such as the chi-square test. We evaluate the feature selection methods and diagnostic models on two measures: false negative rate (FNR) and accuracy. Extensive experiments have been conducted on a real-world Chinese electronic medical record database. The evaluation results demonstrate that our proposed feature selection method can improve accuracy and reduce the FNR compared to traditional feature selection methods, and that the multi-label classification framework has better accuracy and a lower FNR than the traditional expert system.

Journal ArticleDOI
TL;DR: The privacy-preserving outsourcing scheme of reversible data hiding over encrypted image data in cloud computing is proposed, which not only ensures multimedia data security without relying on the trustworthiness of cloud servers, but also guarantees that reversibleData hiding can be operated over encrypted images at the different stages.
Abstract: Advanced cloud computing technology provides cost savings and service flexibility for users. With the explosion of multimedia data, more and more data owners outsource their personal multimedia data to the cloud. In the meantime, some computationally expensive tasks are also undertaken by cloud servers. However, outsourced multimedia data and its applications may reveal the data owner's private information, because data owners lose control of their data. Recently, this has aroused new research interest in privacy-preserving reversible data hiding over outsourced multimedia data. In this paper, two reversible data hiding schemes are proposed for encrypted image data in cloud computing: reversible data hiding by homomorphic encryption, and reversible data hiding in the encrypted domain. In the former, additional bits are extracted after decryption; in the latter, they are extracted before decryption. A combined scheme is also designed. This paper thus proposes a privacy-preserving outsourcing scheme of reversible data hiding over encrypted image data in cloud computing, which not only ensures multimedia data security without relying on the trustworthiness of cloud servers, but also guarantees that reversible data hiding can be operated over encrypted images at the different stages. Theoretical analysis confirms the correctness of the proposed encryption model and justifies the security of the proposed scheme. The computational cost of the proposed scheme is acceptable and adjusts to different security levels.

Journal ArticleDOI
TL;DR: A full-blind DQC protocol (FDQC) with a quantum gate set is proposed, in which the desired delegated quantum operation, one of the gates in the set, is replaced by a fixed sequence, and the decryption circuit of the Toffoli gate is also optimized.
Abstract: The first delegating private quantum computation (DQC) protocol with a universal quantum gate set was proposed by Broadbent et al., after which Tan et al. put forward a half-blind DQC protocol (HDQC) with another universal set. However, the decryption circuit of the Toffoli gate is a little redundant, and Tan et al.'s protocol suffers from information leakage. In addition, both of these protocols focus only on the blindness of data (i.e., the client's input and output) and do not consider the blindness of computation (i.e., the delegated quantum operation). To solve these problems, we propose a full-blind DQC protocol (FDQC) with a quantum gate set, in which the desired delegated quantum operation, one of the gates in the set, is replaced by a fixed sequence to make the computation blind, and the decryption circuit of the Toffoli gate is also optimized. Analysis shows that our protocol can not only correctly perform any delegated quantum computation, but also retains both data blindness and computation blindness.

Journal ArticleDOI
TL;DR: This paper proposes an approach for synthesizing complex texture-like image of arbitrary size using a modified deep convolutional generative adversarial network (DCGAN), and demonstrates the feasibility of embedding another image inside the generated texture while the difference between the two images is nearly invisible to the human eyes.
Abstract: Deep neural networks have proven to be very effective in computer vision fields. Deep convolutional networks can learn the most suitable features of certain images without specific measure functions and outperform many traditional image processing methods. The generative adversarial network (GAN) is becoming one of the highlights among these deep neural networks. GANs are capable of generating realistic images that are imperceptible to the human vision system, so the generated images can be directly used as an intermediate medium for many tasks. One promising application of GAN-generated images is image concealing, which requires that the embedded image look untampered to the human vision system and also be undetectable to most analyzers. Texture synthesis has drawn much attention in the computer vision field and is used for image concealing in steganography and watermarking. Traditional methods that use synthesized textures for information hiding mainly select features and mathematical functions by human metrics and usually have a low embedding rate. This paper takes advantage of the generative network and proposes an approach for synthesizing complex texture-like images of arbitrary size using a modified deep convolutional generative adversarial network (DCGAN), and then demonstrates the feasibility of embedding another image inside the generated texture while the difference between the two images remains nearly invisible to the human eye.

Journal ArticleDOI
TL;DR: The experimental results show that the average localization error of the proposed localization algorithm is 0.30 meters, significantly better than the accuracy of existing typical indoor Wi-Fi access point localization methods.
Abstract: Precise localization techniques for indoor Wi-Fi access points (APs) have important applications in security inspection. However, due to the interference of environmental factors such as multipath propagation and NLOS (Non-Line-of-Sight), existing methods for localizing indoor Wi-Fi access points based on RSS ranging tend to have lower accuracy, as the RSS (Received Signal Strength) is difficult to measure accurately. Therefore, a localization algorithm for indoor Wi-Fi access points based on the relative relationship of signal strengths and region division is proposed in this paper. The algorithm hierarchically divides the room where the target Wi-Fi AP is located. On the regional division line, a modified signal collection device is used to measure the RSS in two directions at each reference point. All RSS values are compared, and the region with the relatively largest signal strength is chosen as the next candidate region. The location coordinates of the target Wi-Fi AP are obtained by successively approximating its localization region until the candidate region is smaller than the accuracy threshold. A total of 360 experiments were carried out with 8 types of Wi-Fi APs, including fixed and portable APs. The experimental results show that the average localization error of the proposed algorithm is 0.30 meters and the minimum localization error is 0.16 meters, significantly better than the accuracy of existing typical indoor Wi-Fi access point localization methods.
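The successive region-halving idea can be sketched as a 2-D binary search driven by relative RSS comparisons. The alternating-axis split, the callback interface, and the simulated measurement oracle below are all assumptions made so the sketch stays self-contained.

```python
def locate_ap(stronger_half, region, threshold=0.5):
    """Successive region halving: keep the half in which the measured
    RSS is relatively stronger, until the candidate region is below
    the accuracy threshold. `stronger_half(x_range, y_range, axis)`
    stands in for the field measurement and returns 0 (lower half)
    or 1 (upper half).
    """
    (x_lo, x_hi), (y_lo, y_hi) = region
    axis = 0
    while max(x_hi - x_lo, y_hi - y_lo) > threshold:
        if axis == 0 and x_hi - x_lo > threshold:
            mid = (x_lo + x_hi) / 2.0
            if stronger_half((x_lo, x_hi), (y_lo, y_hi), 0) == 0:
                x_hi = mid
            else:
                x_lo = mid
        elif axis == 1 and y_hi - y_lo > threshold:
            mid = (y_lo + y_hi) / 2.0
            if stronger_half((x_lo, x_hi), (y_lo, y_hi), 1) == 0:
                y_hi = mid
            else:
                y_lo = mid
        axis ^= 1
    return ((x_lo + x_hi) / 2.0, (y_lo + y_hi) / 2.0)

def make_rss_oracle(ap):
    """Simulated measurement for demonstration: the half whose centre
    is closer to the (hidden) AP reports the stronger RSS."""
    def stronger(xr, yr, axis):
        xm, ym = (xr[0] + xr[1]) / 2.0, (yr[0] + yr[1]) / 2.0
        if axis == 0:
            c0, c1 = ((xr[0] + xm) / 2.0, ym), ((xm + xr[1]) / 2.0, ym)
        else:
            c0, c1 = (xm, (yr[0] + ym) / 2.0), (xm, (ym + yr[1]) / 2.0)
        d0 = (c0[0] - ap[0]) ** 2 + (c0[1] - ap[1]) ** 2
        d1 = (c1[0] - ap[0]) ** 2 + (c1[1] - ap[1]) ** 2
        return 0 if d0 <= d1 else 1
    return stronger
```

Because only relative comparisons are used, the scheme avoids the absolute-RSS ranging errors caused by multipath and NLOS conditions.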

Journal ArticleDOI
TL;DR: An improved outlier detection algorithm is presented that detects outliers more effectively and outperforms existing algorithms.
Abstract: In recent years, the rapid development of big data technology has attracted growing attention from scholars. Massive data storage and computation problems have been solved, but outlier detection problems in massive data have come along with them; therefore, more research work has been devoted to outlier detection in big data. However, existing methods have high computation time. This paper presents an improved outlier detection algorithm with better performance. SMK-means is a fusion algorithm that combines Mini Batch K-means with a simulated annealing algorithm for anomaly detection in massive household electricity data; it can determine the number of clusters, reduce the number of iterations, and improve clustering accuracy. Several experiments are performed to compare and analyze multiple aspects of the algorithm's performance. The analysis shows that the proposed algorithm is superior to existing algorithms.
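The Mini Batch K-means building block of SMK-means can be sketched as below, with per-centre decaying learning rates in the style of Sculley's mini-batch formulation. The simulated-annealing component that selects the number of clusters and the initial centres is not reproduced here; first-k initialization is used for brevity.

```python
import random

def mini_batch_kmeans(points, k, batch_size=10, iters=200, seed=0):
    """Plain Mini Batch K-means step. Each batch point is assigned to
    its nearest centre, and that centre moves toward the point with a
    decaying per-centre step size 1/count. SMK-means would replace the
    naive first-k initialization below with simulated-annealing-based
    selection (not shown).
    """
    rng = random.Random(seed)
    centres = [list(p) for p in points[:k]]
    counts = [0] * k
    for _ in range(iters):
        batch = [rng.choice(points) for _ in range(batch_size)]
        for p in batch:
            # Nearest centre by squared Euclidean distance.
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(centres[c], p)))
            counts[j] += 1
            eta = 1.0 / counts[j]  # decaying per-centre learning rate
            centres[j] = [c + eta * (a - c) for c, a in zip(centres[j], p)]
    return centres
```

For outlier detection, points far from their nearest final centre would then be flagged as anomalies.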

Journal ArticleDOI
TL;DR: A new model for learning paragraph vectors is developed by combining the CBOW model and CNNs; experimental results show that the new model outperforms the CBOW model in semantic relatedness and accuracy in paragraph vector space.
Abstract: Document processing in natural language includes retrieval, sentiment analysis, theme extraction, etc. Classical methods for handling these tasks are based on models of probability, semantics and networks for machine learning. The probability model essentially entails a \emph{loss of semantic information}, which influences processing accuracy. Machine learning approaches include supervised, unsupervised, and semi-supervised approaches; labeled corpora are necessary for the semantics model and for supervised learning. Reliably labeled corpora are produced manually, which is \emph{costly and time-consuming} because people have to read and annotate the label of each document. Recently, the continuous CBOW model has proven efficient for learning high-quality distributed vector representations and can capture a large number of precise syntactic and semantic word relationships; it can easily be extended to learn paragraph vectors, but it \emph{is not precise}. To address these problems, this paper develops a new model for learning paragraph vectors by combining the CBOW model and CNNs into a new deep learning model. Experimental results show that the new model is better than the CBOW model in semantic relatedness and accuracy in paragraph vector space.

Journal ArticleDOI
TL;DR: Because the proposed protocol does not require the participation of the classical channel when transmitting secret information, it does not cause any additional information leakage and has good security.
Abstract: In this paper, a quantum steganography protocol based on Brown entangled states is proposed. The new protocol adopts the CNOT operation to transmit secret information by making the best use of the characteristics of entangled states. Compared with previous quantum steganography algorithms, the new protocol focuses on its anti-noise capability against phase-flip noise, which demonstrates its security against quantum noise. Furthermore, due to the new protocol's good imperceptibility, the covert communication of secret information in the quantum secure direct communication channel does not affect the normal information transmission process. If the number of Brown states transmitted in the cover protocol is large enough, the imperceptibility of the secret channel can be further enhanced. In terms of capacity, the protocol can be further expanded by combining it with other quantum steganography protocols. Because the proposed protocol does not require the participation of the classical channel when transmitting secret information, it does not cause any additional information leakage and has good security. Detailed theoretical analysis shows that the new protocol performs well in imperceptibility, capacity and security.
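To ground the two building blocks named in the abstract, the sketch below shows in NumPy how a CNOT gate creates entanglement and what a phase-flip error does to the resulting state (it uses a two-qubit Bell state for brevity, not the five-qubit Brown states of the actual protocol):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)          # |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
Z = np.diag([1.0, -1.0]).astype(complex)        # phase-flip (Pauli-Z)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)  # control = first qubit

# |00> --(H on qubit 1)--> (|00>+|10>)/sqrt(2) --CNOT--> Bell state
state = np.kron(ket0, ket0)
state = np.kron(H, np.eye(2)) @ state
state = CNOT @ state                            # (|00> + |11>)/sqrt(2)

# phase-flip noise on the first qubit maps Phi+ to Phi-
noisy = np.kron(Z, np.eye(2)) @ state           # (|00> - |11>)/sqrt(2)
```

The key point: a phase flip moves the state between orthogonal Bell states rather than destroying the entanglement, which is why protocols can be designed to detect or tolerate this noise class.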

Journal ArticleDOI
TL;DR: The experimental results show that the network security situation awareness method proposed in this paper can accurately reflect changes in the network security situation and predict attack behavior.
Abstract: Network security situation awareness is an important foundation for network security management; it presents the security status of the target system by analyzing existing or potential cyber threats in that system. In network offense and defense, the network security state of the target system is affected by both offensive and defensive strategies. Based on this feature, this paper proposes a network security situation awareness method using a stochastic game in a cloud computing environment, which uses the utility of both sides of the game to quantify the network security situation value. This method analyzes the nodes based on the network security state of the target virtual machine and uses the virtual machine introspection mechanism to obtain the impact of network attacks on the target virtual machine, then dynamically evaluates the network security situation of the cloud environment based on the game process between attack and defense. In attack prediction, cyber threat intelligence is used as an important basis for potential threat analysis. Cyber threat intelligence applicable to the current security state is screened through the system hierarchy fuzzy optimization method, and the potential threats to the target system are analyzed using the intelligence obtained through screening. If no applicable cyber threat intelligence exists, the Nash equilibrium is used to predict attack behavior. The experimental results show that the proposed network security situation awareness method can accurately reflect changes in the network security situation and predict attack behavior.
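As a rough illustration of the Nash-equilibrium prediction step (not the paper's actual solver), the sketch below approximates the mixed equilibrium of a tiny zero-sum attacker/defender game via fictitious play; the 2x2 payoff matrix is an invented example:

```python
import numpy as np

# payoff to the row player (attacker); the column player (defender) minimises.
# a matching-pennies-style game: the attacker wins when the defender guards
# the wrong asset, so the equilibrium is a mixed strategy.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def fictitious_play(A, n_iters=20000):
    """Each player best-responds to the opponent's empirical strategy mix;
    in zero-sum games the averaged strategies converge to a Nash equilibrium."""
    row_counts = np.ones(A.shape[0])
    col_counts = np.ones(A.shape[1])
    for _ in range(n_iters):
        row_br = (A @ (col_counts / col_counts.sum())).argmax()   # attacker
        col_br = ((row_counts / row_counts.sum()) @ A).argmin()   # defender
        row_counts[row_br] += 1
        col_counts[col_br] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

p, q = fictitious_play(A)  # both converge toward the (0.5, 0.5) equilibrium
```

The equilibrium mixture is what such a method would report as the attacker's predicted behavior when no applicable threat intelligence is available.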

Journal ArticleDOI
TL;DR: Experimental results show that the proposed PPCNN method is effective and superior to other baseline methods, especially for datasets with transition sentences, and that the relative sequence of the extracted features is preserved.
Abstract: Recently, the effectiveness of neural networks, especially convolutional neural networks, has been validated in the field of natural language processing, in which sentiment classification for online reviews is an important and challenging task. Existing convolutional neural networks extract important features of sentences without capturing local features or the feature sequence. Thus, these models do not perform well, especially on transition sentences. To this end, we propose a Piecewise Pooling Convolutional Neural Network (PPCNN) for sentiment classification. Firstly, with a sentence represented by word vectors, a convolution operation is applied to obtain the convolution feature map vectors. Secondly, these vectors are segmented according to the positions of transition words in the sentence. Thirdly, the most significant feature of each local segment is extracted using the max pooling mechanism, so that different aspects of features can be extracted; notably, the relative sequence of these features is preserved. Finally, after being processed by the dropout algorithm, the softmax classifier is trained for sentiment classification. Experimental results show that the proposed PPCNN method is effective and superior to other baseline methods, especially for datasets with transition sentences.
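The piecewise pooling step can be sketched in a few lines of NumPy (the feature values and the transition-word position are invented for illustration; the real model pools many feature maps produced by convolution filters):

```python
import numpy as np

def piecewise_max_pool(feature_map, cut_points):
    """Max-pool each segment of a convolution feature map separately,
    splitting at the positions of transition words (e.g. 'but').
    Returns one value per segment, preserving segment order."""
    bounds = [0] + sorted(cut_points) + [len(feature_map)]
    return np.array([feature_map[a:b].max()
                     for a, b in zip(bounds[:-1], bounds[1:])])

# a feature map over a 6-token sentence with a transition word at index 3
fm = np.array([0.2, 0.9, 0.1, 0.4, 0.8, 0.3])
pooled = piecewise_max_pool(fm, cut_points=[3])
```

Unlike global max pooling, which would collapse this map to the single value 0.9, piecewise pooling keeps one strong feature per clause, in order, so opposite sentiments on either side of "but" both survive into the classifier.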


Journal ArticleDOI
TL;DR: A novel binary image steganalytic scheme is proposed, based on the distortion level co-occurrence matrix; experimental results demonstrate that the proposed scheme can effectively detect stego images.
Abstract: In recent years, binary image steganography has developed so rapidly that research on image steganalysis has become more important for information security. Most state-of-the-art binary image steganographic schemes find the flippable pixels so as to minimize the embedding distortion. For this reason, the stego images generated by these schemes maintain good visual quality, and it is hard for a steganalyzer to capture the embedding trace in the spatial domain. However, distortion maps can be calculated for cover and stego images, and the difference between them is significant. In this paper, a novel binary image steganalytic scheme is proposed, based on the distortion level co-occurrence matrix. The proposed scheme first generates the corresponding distortion maps for cover and stego images. Then the co-occurrence matrix is constructed on the distortion level maps to represent the features of cover and stego images. Finally, a support vector machine with a Gaussian kernel is used to classify the features. Compared with prior steganalytic methods, experimental results demonstrate that the proposed scheme can effectively detect stego images.
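The co-occurrence feature construction can be sketched as follows (a generic co-occurrence matrix over a quantised level map; the tiny map, number of levels and offset are invented, and the paper's exact quantisation of distortion values is not reproduced here):

```python
import numpy as np

def cooccurrence(levels, n_levels, offset=(0, 1)):
    """Count how often the level pair (i, j) occurs at the given pixel
    offset in a quantised distortion-level map. Entry M[i, j] is the
    number of positions p where levels[p] == i and levels[p+offset] == j."""
    dy, dx = offset
    M = np.zeros((n_levels, n_levels), dtype=int)
    h, w = levels.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[levels[y, x], levels[y + dy, x + dx]] += 1
    return M

# toy 2x3 distortion-level map quantised to 3 levels
dmap = np.array([[0, 0, 1],
                 [1, 2, 2]])
M = cooccurrence(dmap, n_levels=3)  # horizontal neighbour statistics
```

The flattened matrix (possibly over several offsets) is the feature vector handed to the SVM; embedding tends to shift mass between entries of this matrix even when the stego image looks unchanged.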

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed steganographic scheme presents high undetectability compared with existing IPM-based steganographic approaches, and also outperforms these schemes in stego video quality.
Abstract: In this paper, an effective intra prediction mode-based video steganography is proposed. Secret messages are embedded during the intra prediction of video encoding without causing a large embedding impact. The influence on the sum of absolute differences (SAD) is sharp in the intra prediction mode (IPM) reversion phenomenon when IPMs are modified. This inspires us to take the SAD prediction deviation (SPD) to define the distortion function. Moreover, a mapping rule between IPMs and codewords is introduced to further reduce the SPD value of each intra block. Syndrome-trellis code (STC) is used as the practical embedding implementation. Experimental results demonstrate that the proposed steganographic scheme presents high undetectability compared with existing IPM-based steganographic approaches. It also outperforms these schemes in stego video quality.
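To make the SAD/SPD vocabulary concrete, the sketch below computes the SAD of a tiny luma block against two candidate intra predictions and takes their difference as an illustrative per-block cost (the 2x2 block and prediction values are invented; the paper's SPD definition over real encoder modes may differ in detail):

```python
import numpy as np

def sad(block, prediction):
    """Sum of absolute differences between a block and its intra prediction."""
    return int(np.abs(block.astype(int) - prediction.astype(int)).sum())

# toy 2x2 luma block and two candidate intra predictions
block = np.array([[12, 14], [13, 15]], dtype=np.uint8)
pred_best = np.array([[12, 13], [13, 14]], dtype=np.uint8)  # encoder's best mode
pred_alt = np.array([[10, 10], [10, 10]], dtype=np.uint8)   # mode forced by embedding

# SAD prediction deviation: extra residual cost paid for flipping the mode
spd = sad(block, pred_alt) - sad(block, pred_best)
```

A distortion function built from such per-block costs lets STC steer the embedding toward blocks where flipping the IPM is cheap, which is what keeps both detectability and quality loss low.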

Journal ArticleDOI
TL;DR: The structure of IoT device identification is proposed; key technologies such as signal detection, RFF extraction, and classification models are discussed; and a novel ensemble learning algorithm based on the random forest and Dempster-Shafer evidence algorithms is proposed.
Abstract: In the last decade, IoT has been widely used in smart cities, autonomous driving and Industry 4.0, leading to improved efficiency, reliability, security and economic benefits. However, with the rapid development of new technologies such as cognitive communication, cloud computing, quantum computing and big data, IoT security is confronted with a series of new threats and challenges. IoT device identification via radio frequency fingerprinting (RFF) extracted from radio signals is a physical-layer method for IoT security. At the physical layer, the RFF is a unique characteristic of the IoT device itself and is difficult to tamper with. Just like people's unique fingerprints, different IoT devices exhibit different RFFs, which can be used for identification and authentication. In this paper, the structure of IoT device identification is proposed, and key technologies such as signal detection, RFF extraction and classification models are discussed. In particular, a novel ensemble learning algorithm based on the random forest and the Dempster-Shafer evidence algorithm is proposed. Through theoretical modeling and experimental verification, the reliability and distinguishability of the extracted RFFs are verified, and classification results are shown under real IoT device environments.
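The evidence-fusion half of the proposed ensemble rests on Dempster's rule of combination, which can be sketched generically as below (the frame of discernment with two devices A and B, and the two mass assignments, are invented examples; the paper combines masses produced by random-forest classifiers):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two basic probability assignments, given as
    dicts mapping frozenset focal elements to masses, normalising away the
    mass assigned to conflicting (disjoint) pairs."""
    combined = {}
    conflict = 0.0
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2          # disjoint focal elements
    k = 1.0 - conflict                   # normalisation constant
    return {s: v / k for s, v in combined.items()}

A, B = frozenset('A'), frozenset('B')
theta = A | B                            # "either device" (ignorance)
m1 = {A: 0.6, B: 0.1, theta: 0.3}        # evidence from classifier 1
m2 = {A: 0.5, B: 0.2, theta: 0.3}        # evidence from classifier 2
m = dempster_combine(m1, m2)             # fused belief, concentrated on A
```

Two weak votes for device A reinforce each other after combination, which is the point of layering Dempster-Shafer fusion on top of the random forest's per-tree outputs.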

Journal ArticleDOI
TL;DR: This is the first coverless image information hiding method based on a generative model, in contrast to traditional image steganography.
Abstract: In this paper, we propose a novel coverless image steganographic scheme based on a generative model. In our scheme, the secret image is first fed to the generative model database to generate a meaning-normal, independent image different from the secret image. The generated image is then transmitted to the receiver and fed to the generative model database to generate another image visually identical to the secret image. Thus, we only need to transmit a meaning-normal image unrelated to the secret image to achieve the same effect as transmitting the secret image itself. To the best of our knowledge, this is the first coverless image information hiding method based on a generative model, in contrast to traditional image steganography. Since the transmitted image carries no embedded information from the secret image, the method can effectively resist steganalysis tools. Experimental results show that our scheme has high capacity, security and reliability.
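The protocol logic, stripped of the generative models, amounts to both sides sharing a mapping between secrets and innocuous stand-in images. The toy below replaces the paper's generative model database with a shared random codebook (every name and the codebook mechanism are hypothetical illustrations, not the actual scheme):

```python
import numpy as np

rng = np.random.default_rng(42)

# stand-in for the shared generative model database: a codebook of
# "meaning-normal" cover images, identical on both sides of the channel
codebook = {i: rng.integers(0, 256, (4, 4)) for i in range(8)}

def send(secret_id):
    """Sender: emit the unrelated cover image that stands in for the secret."""
    return codebook[secret_id]

def receive(cover):
    """Receiver: recover the secret id by matching the received image
    against the shared codebook (the paper regenerates the secret image
    with its own generative model instead)."""
    for i, img in codebook.items():
        if np.array_equal(img, cover):
            return i
    raise ValueError("unknown cover image")

cover = send(secret_id=5)  # only this unrelated image crosses the channel
```

Nothing about the transmitted image is derived from the secret's pixels, which is why there is no embedding trace for a steganalyzer to find; the real scheme achieves the same property with paired generative models instead of a lookup table.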