
Showing papers in "Chinese Journal of Electronics in 2015"


Journal ArticleDOI
TL;DR: The experimental results show that EABC and EABCK outperform other comparative ABC variants and data clustering algorithms, respectively.
Abstract: To improve the performance of the K-means clustering algorithm, this paper presents a new hybrid approach combining an Enhanced artificial bee colony algorithm and K-means (EABCK). In EABCK, the original artificial bee colony algorithm (ABC) is enhanced by a new mutation operation and guided by the global best solution (EABC). The best solution is then updated by K-means in each iteration for data clustering. In the experiments, a set of benchmark functions was used to evaluate the performance of EABC against other ABC variants. To evaluate the performance of EABCK on data clustering, eleven benchmark datasets were utilized. The experimental results show that EABC and EABCK outperform the comparative ABC variants and data clustering algorithms, respectively.

31 citations
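For the EABCK entry above, the hybrid loop can be pictured with a minimal sketch: candidate centroid sets act as food sources, a mutation step guided by the global best improves them, and the current best is refined by K-means in each iteration. The guided-mutation coefficients, the function names, and the per-iteration K-means settings below are illustrative assumptions, not the authors' implementation (X is assumed to be a NumPy array of samples).

```python
import numpy as np
from sklearn.cluster import KMeans

def sse(centroids, X):
    # Sum of squared distances of each point to its nearest centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((d.min(axis=1) ** 2).sum())

def eabck_like(X, k, n_food=10, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    # Each food source is one candidate set of k centroids.
    food = [X[rng.choice(len(X), k, replace=False)] for _ in range(n_food)]
    fitness = np.array([sse(c, X) for c in food])
    best = food[int(fitness.argmin())].copy()
    for _ in range(iters):
        for i in range(n_food):
            partner = food[rng.integers(n_food)]
            phi = rng.uniform(-1, 1, food[i].shape)
            psi = rng.uniform(0, 1.5, food[i].shape)
            # Mutation guided by the global best solution (the EABC idea).
            cand = food[i] + phi * (food[i] - partner) + psi * (best - food[i])
            if sse(cand, X) < fitness[i]:
                food[i], fitness[i] = cand, sse(cand, X)
        # Refine the current best solution with K-means, as EABCK does each iteration.
        best = food[int(fitness.argmin())]
        best = KMeans(n_clusters=k, init=best, n_init=1, max_iter=5).fit(X).cluster_centers_
    return best
```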


Journal ArticleDOI
TL;DR: Experimental results show that the proposed steganography algorithm not only achieves a high data embedding rate of up to 2.4 Kbps but also achieves better imperceptibility, which indicates that the algorithm can be used for high-capacity data hiding with good effect.
Abstract: This paper proposes an approach to secure communication over the Internet based on speech information hiding. In this approach, an algorithm for embedding 2.4 Kbps low-bit-rate Mixed-excitation linear prediction (MELP) speech into G.729-coded speech is presented by adapting covering-code and interleaving techniques. The parameters of the G.729 source codec are analyzed for their Capability of noise tolerance (CNT), and those with less impact on the quality of the reconstructed speech are selected to carry the secret speech data. Experimental results show that the proposed steganography algorithm not only achieves a high data embedding rate of up to 2.4 Kbps but also achieves better imperceptibility, which indicates that the algorithm can be used for high-capacity data hiding with good effect.

29 citations


Journal ArticleDOI
TL;DR: A new audio hashing scheme based on Non-negative matrix factorization (NMF) of Modified discrete cosine transform (MDCT) coefficients is proposed that exhibits high efficiency in terms of discrimination, perceptual robustness, identification rate, and time consumption.
Abstract: Audio perceptual hashing is a digest of audio content that is independent of content-preserving manipulations such as MP3 compression, amplitude scaling, and noise addition. It provides a fast and reliable tool for identification, retrieval, and authentication of audio signals. A new audio hashing scheme based on Non-negative matrix factorization (NMF) of Modified discrete cosine transform (MDCT) coefficients is proposed. MDCT coefficients, which have been widely used in audio coding, exhibit good discrimination for different audio content and high robustness against content-preserving manipulations, especially MDCT-based compression such as MP3 and AAC. MDCT coefficients are first extracted from the audio frames, and NMF is then used to construct the hash bits. Experimental results demonstrate that, compared with methods reported in the literature, the proposed scheme exhibits high efficiency in terms of discrimination, perceptual robustness, identification rate, and time consumption.

22 citations
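For the audio hashing entry above, a minimal sketch of the pipeline shape: frame the signal, build a non-negative spectral matrix, factorise it with NMF, and binarise the activations into hash bits. A type-IV DCT stands in for the windowed MDCT, and the median-based binarisation and all parameter values are illustrative assumptions, not the paper's construction.

```python
import numpy as np
from scipy.fft import dct
from sklearn.decomposition import NMF

def audio_hash(signal, frame=1024, rank=8):
    hop = frame // 2
    frames = [signal[i:i + frame] for i in range(0, len(signal) - frame, hop)]
    # Non-negative time-frequency matrix: |DCT-IV| per frame (stand-in for a real MDCT).
    V = np.abs(np.stack([dct(f, type=4, norm='ortho') for f in frames]))
    # One activation vector per frame; W has shape (n_frames, rank).
    W = NMF(n_components=rank, init='nndsvda', max_iter=400).fit_transform(V)
    # Binarise each activation against its per-component median to get hash bits.
    bits = (W > np.median(W, axis=0, keepdims=True)).astype(np.uint8)
    return bits.ravel()

# Matching would then compare two hashes by normalised Hamming distance.
```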


Journal ArticleDOI
TL;DR: Theoretical analysis and simulation results showed that the proposed QAS algorithm overcame the shortcomings of existing tree-based algorithms and exhibited good performance during identification.
Abstract: Deterministic tree-based algorithms are mostly used to guarantee that all the tags in the reader field are successfully identified and to achieve the best performance. Through an analysis of the deficiencies of existing tree-based algorithms, a Q-ary search algorithm was proposed. The Q-ary search (QAS) algorithm introduced a bit encoding mechanism for the tag ID by which multi-bit collision arbitration was implemented. Owing to this encoding mechanism, the collision cycle was reduced. Theoretical analysis and simulation results showed that the proposed QAS algorithm overcame the shortcomings of existing tree-based algorithms and exhibited good performance during identification.

21 citations
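For the Q-ary search entry above, the prefix-splitting idea can be shown with a toy reader/tag simulation; the paper's bit-encoding mechanism and multi-bit collision arbitration are not modelled, and the function below is purely illustrative.

```python
# Toy simulation of a Q-ary prefix search for RFID tag identification.
# Tag IDs are fixed-length base-Q strings.
def q_ary_identify(tags, q=4, id_len=4):
    identified, queries = [], 0
    stack = ['']                        # prefixes still to be queried
    while stack:
        prefix = stack.pop()
        queries += 1
        responders = [t for t in tags if t.startswith(prefix)]
        if len(responders) == 1:
            identified.append(responders[0])
        elif len(responders) > 1:
            if len(prefix) == id_len:   # identical IDs can never be separated
                identified.extend(responders)
            else:
                # Collision: split the query into Q narrower prefixes.
                stack.extend(prefix + str(d) for d in range(q))
    return identified, queries

tags = ['0123', '0131', '2203', '3310']
print(q_ary_identify(tags, q=4, id_len=4))
```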


Journal ArticleDOI
TL;DR: This paper adapts the classic escape time algorithm to a cloud environment to improve its performance, provides a separation method for the algorithm in the cloud, and calculates the complexity of the new algorithm with a probability model based on the allocation policy.
Abstract: Since fractals are widely used across scientific domains today, the escape time algorithm, the most effective algorithm for drawing fractal figures, performs poorly when the generation function is complex. In this paper, we adapt the classic escape time algorithm to a cloud environment to improve its performance. First, we provide a separation method for the escape time algorithm in the cloud environment. Then we calculate the complexity of the new algorithm with a probability model based on the allocation policy. Finally, we use generalized fractal sets as experimental subjects to validate our conclusions. Experimental results show the correctness and speed of the new algorithm.

20 citations
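For the escape-time entry above, a small sketch of the separation idea: the escape-time iteration is independent per pixel, so the image can be split into row blocks that different workers (cloud nodes in the paper) compute and then reassemble. The Mandelbrot set is used as a stand-in generation function; the paper's allocation policy and probability model are not shown.

```python
import numpy as np

def escape_time_block(rows, cols, row_range, max_iter=100):
    # Escape-time counts for one horizontal slice of the image.
    y = np.linspace(-1.5, 1.5, rows)[row_range[0]:row_range[1]]
    x = np.linspace(-2.0, 1.0, cols)
    c = x[None, :] + 1j * y[:, None]
    z = np.zeros_like(c)
    count = np.zeros(c.shape, dtype=int)
    for _ in range(max_iter):
        mask = np.abs(z) <= 2
        z[mask] = z[mask] ** 2 + c[mask]
        count += mask
    return count

def render(rows=600, cols=800, n_workers=4):
    # Each block could be dispatched to a different node; here they run locally.
    bounds = np.linspace(0, rows, n_workers + 1, dtype=int)
    blocks = [escape_time_block(rows, cols, (bounds[i], bounds[i + 1]))
              for i in range(n_workers)]
    return np.vstack(blocks)
```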


Journal ArticleDOI
TL;DR: This paper presents a new approach, from the viewpoint of correcting mislabeled instances, to finding deceptive opinion spam, and shows significant improvements over existing baselines.
Abstract: Assessing the trustworthiness of reviews is a key task in natural language processing and computational linguistics. Previous work mainly relies on heuristic strategies or simple supervised learning methods, which limits the performance of this task. This paper presents a new approach, from the viewpoint of correcting mislabeled instances, to finding deceptive opinion spam. The dataset is partitioned into several subsets; a classifier set is constructed for each subset, and the best classifier is selected to evaluate the whole dataset. Error variables are defined to compute the probability that an instance has been mislabeled. The mislabeled instances are corrected based on two threshold schemes, majority and non-objection. The results show significant improvements of our method over existing baselines.

18 citations
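For the deceptive-opinion-spam entry above, a hedged sketch of the correction loop: partition the data, train several candidate classifiers per subset, keep the best one, and relabel instances that the kept classifiers dispute under a majority or non-objection threshold. The classifier pool, the best-classifier proxy, and the relabeling rule are assumptions; the paper's error-variable formulation is more involved.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

def correct_labels(X, y, n_subsets=5, scheme='majority', seed=0):
    y = y.copy()
    kept = []
    for _, idx in KFold(n_subsets, shuffle=True, random_state=seed).split(X):
        candidates = [LogisticRegression(max_iter=1000),
                      DecisionTreeClassifier(random_state=seed),
                      GaussianNB()]
        fitted = [c.fit(X[idx], y[idx]) for c in candidates]
        # Keep the classifier that best fits its own subset (a crude proxy for
        # "select the best one to evaluate the whole dataset").
        kept.append(max(fitted, key=lambda c: c.score(X[idx], y[idx])))
    votes = np.stack([c.predict(X) for c in kept])        # (n_subsets, n_instances)
    disagree = (votes != y).mean(axis=0)
    # Majority: flip when most kept classifiers disagree; non-objection: only when all do.
    flip = disagree > 0.5 if scheme == 'majority' else disagree == 1.0
    for i in np.where(flip)[0]:
        vals, counts = np.unique(votes[:, i], return_counts=True)
        y[i] = vals[counts.argmax()]                      # relabel with the plurality vote
    return y, flip
```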


Journal ArticleDOI
TL;DR: This paper presents an attribute-based secure data sharing scheme with Efficient revocation (EABDS) in cloud computing that first encrypts data with Data encryption key (DEK) using symmetric encryption and then encrypts DEK based on CP-ABE, which guarantees the data confidentiality and achieves fine-grained access control.
Abstract: Ciphertext-policy attribute-based encryption (CP-ABE) is becoming a promising solution for guaranteeing data security in cloud computing. In this paper, we present an attribute-based secure data sharing scheme with Efficient revocation (EABDS) in cloud computing. Our scheme first encrypts the data with a Data encryption key (DEK) using symmetric encryption and then encrypts the DEK based on CP-ABE, which guarantees data confidentiality and achieves fine-grained access control. In order to solve the key escrow problem in current attribute-based data sharing schemes, our scheme adopts additively homomorphic encryption so that the attribute authority generates users' attribute secret keys in cooperation with a key server, which prevents the attribute authority from accessing the data by generating attribute secret keys alone. Our scheme also presents an immediate attribute revocation method that achieves both forward and backward security. The computation overhead of the user is further reduced by delegating most of the decryption operations to the key server. The security and performance analysis results show that our scheme is more secure and efficient.

17 citations
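For the EABDS entry above, only the hybrid envelope layout is easy to sketch: bulk data under a symmetric DEK, and the DEK wrapped under a CP-ABE policy. The cp_abe_encrypt/cp_abe_decrypt callables are hypothetical placeholders for a real CP-ABE library, and the paper's cooperative key issuing, revocation, and outsourced decryption are not modelled.

```python
from cryptography.fernet import Fernet

def share_data(plaintext: bytes, access_policy: str, cp_abe_encrypt):
    dek = Fernet.generate_key()                       # data encryption key (DEK)
    ciphertext = Fernet(dek).encrypt(plaintext)       # bulk data: symmetric encryption
    wrapped_dek = cp_abe_encrypt(dek, access_policy)  # DEK: CP-ABE under the access policy
    return ciphertext, wrapped_dek

def access_data(ciphertext: bytes, wrapped_dek, user_attr_keys, cp_abe_decrypt):
    # CP-ABE decryption succeeds only if the user's attributes satisfy the policy.
    dek = cp_abe_decrypt(wrapped_dek, user_attr_keys)
    return Fernet(dek).decrypt(ciphertext)
```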


Journal ArticleDOI
TL;DR: A novel compressive sensing-based audio semi-fragile zero-watermarking algorithm that improves malicious tampering detection accuracy in common audio signal processing environments.
Abstract: A novel compressive sensing-based audio semi-fragile zero-watermarking algorithm is proposed in this paper. This algorithm transforms the original audio signal into the wavelet domain and applies compressive sensing theory to the approximation wavelet coefficients. The zero-watermarking is constructed according to the positive and negative properties of elements in the measurement vector. The experimental results show that the proposed algorithm is robust against common audio signal processing and fragile to malicious tampering. Compared with the existing algorithms, the proposed algorithm improves malicious tampering detection accuracy in common audio signal processing environments.

15 citations
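For the zero-watermarking entry above, a minimal sketch of the construction step: take the wavelet approximation band, project it with a random (secret) measurement matrix, and keep the sign pattern as the zero-watermark. The wavelet, decomposition level, and matrix size are illustrative choices, not the paper's settings.

```python
import numpy as np
import pywt

def zero_watermark(audio, m=128, level=3, seed=42):
    approx = pywt.wavedec(audio, 'db4', level=level)[0]   # approximation coefficients
    rng = np.random.default_rng(seed)
    phi = rng.standard_normal((m, len(approx)))           # measurement matrix (kept secret as the key)
    y = phi @ approx                                      # compressive measurements
    return (y >= 0).astype(np.uint8)                      # zero-watermark = sign pattern

# Verification would regenerate the bits from a test signal with the same key
# and compare the bit error rate against a tamper-detection threshold.
```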


Journal ArticleDOI
TL;DR: An incremental feature selection algorithm for dynamic decision systems is developed based on the dependency function, which avoids recomputation rather than treating the dynamic decision system as a new one and computing the feature subset from scratch.
Abstract: Feature selection is a challenging problem in pattern recognition and machine learning. In real-life applications, the feature set of a decision system may vary over time, yet there are few studies on feature selection under such variation. This paper focuses on this issue: an incremental feature selection algorithm for dynamic decision systems is developed based on the dependency function. The incremental algorithm avoids recomputation, rather than treating the dynamic decision system as a new one and computing the feature subset from scratch. We first update the dependency function in an incremental manner, and then incorporate the updated dependency function into the incremental feature selection algorithm. Compared with the direct (non-incremental) algorithm, the computational efficiency of the proposed algorithm is improved. Experimental results on different UCI data sets show that the proposed algorithm is effective and efficient.

15 citations


Journal ArticleDOI
Yubo Men1, Guoyin Zhang1, Chaoguang Men1, Xiang Li1, Ning Ma1 
TL;DR: In this paper, a four-moded Census transform stereo matching algorithm using bidirectional-constraint dynamic programming and relative-confidence plane fitting is proposed to solve matching quality problems: the four-moded Census transform, which adds a restrictive condition, replaces the traditional Census transform to improve matching accuracy, and the mean intensity of all pixels replaces the center pixel intensity in the Census window.
Abstract: A four-moded Census transform stereo matching algorithm using bidirectional-constraint dynamic programming and relative-confidence plane fitting is proposed to solve matching quality problems. A refined initial local matching cost is obtained by replacing the traditional Census transform with the four-moded Census transform, which adds a restrictive condition to improve matching accuracy, and by replacing the center pixel intensity in the Census window with the mean intensity of all pixels, which effectively solves the problem of center pixel distortion. During disparity optimization, the difficulty of disparity computation in textureless areas is overcome by the estimation condition and the defined relatively confident pixels. Experimental results show that a better dense matching map can be obtained by the proposed algorithm.

15 citations
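For the stereo matching entry above, the mean-centred Census idea alone is easy to sketch: each window pixel is compared with the window mean rather than the possibly distorted center pixel. The four-moded restrictive condition and the dynamic-programming optimisation are not shown; the window size and bit encoding are illustrative.

```python
import numpy as np

def mean_census(img, win=5):
    # Census codes where the comparison reference is the window mean, not the center pixel.
    h, w = img.shape
    r = win // 2
    codes = np.zeros((h, w), dtype=np.uint64)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = img[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            bits = (patch > patch.mean()).astype(np.uint64).ravel()
            codes[y, x] = int(''.join(map(str, bits)), 2)
    return codes

# The matching cost between two pixels is then the Hamming distance of their codes.
```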


Journal ArticleDOI
TL;DR: Empirical results show that EvoQ can efficiently generate test cases for SUT with ICH and NPM and achieves higher branch coverage than two state-of-the-art test generation approaches within the same time budget.
Abstract: Recent advances in evolutionary test generation greatly facilitate the testing of Object-oriented (OO) software. Existing test generation approaches are still limited when the Software under test (SUT) includes Inherited class hierarchies (ICH) and Non-public methods (NPM). This paper presents an approach to generate test cases for OO software via integrating evolutionary testing with reinforcement learning. For OO software with ICH and NPM, two kinds of particular isomorphous substitution actions are presented and a Q-value matrix is maintained to assist the evolutionary test generation. A prototype called EvoQ is developed based on this approach and is applied to generate test cases for actual Java programs. Empirical results show that EvoQ can efficiently generate test cases for SUT with ICH and NPM and achieves higher branch coverage than two state-of-the-art test generation approaches within the same time budget.

Journal ArticleDOI
TL;DR: Simulation results show that the QC Low-density parity-check (LDPC) codes can perform well in comparison with a variety of other LDPC codes, and have excellent error floor and decoding convergence characteristics.
Abstract: Quasi-cyclic (QC) Low-density parity-check (LDPC) codes are constructed from a combination of weight-0 (null) and Weight-2 (W2) Circulant matrices (CM), which can be seen as a special case of general type-II QC LDPC codes. The shift matrix of the codes is built on the basis of an integer sequence called a perfect Cyclic difference set (CDS), which guarantees that the girth of the code is at least six. Simulation results show that the codes perform well in comparison with a variety of other LDPC codes. They have excellent error floor and decoding convergence characteristics.
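For the QC-LDPC entry above, a sketch of how a parity-check matrix is assembled from weight-0 and weight-2 circulant blocks. The shift pairs in the example are arbitrary placeholders; the paper derives them from a perfect cyclic difference set, which is what guarantees the girth of at least six.

```python
import numpy as np

def circulant(size, shift):
    # Cyclic shift of the identity: a circulant permutation matrix.
    return np.roll(np.eye(size, dtype=int), shift, axis=1)

def build_H(shift_table, size):
    """shift_table[i][j] is None (zero block) or a pair of shifts (a, b) for a weight-2 block."""
    rows = []
    for row in shift_table:
        blocks = [np.zeros((size, size), dtype=int) if s is None
                  else (circulant(size, s[0]) + circulant(size, s[1])) % 2
                  for s in row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# Example: a 2x3 array of blocks with circulant size 7 (shifts purely illustrative).
H = build_H([[(0, 1), (0, 2), None],
             [None, (0, 3), (1, 5)]], size=7)
```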

Journal ArticleDOI
TL;DR: A new variable selection method, the logistic elastic net, is proposed, and it is proved to have a grouping effect, which means that strongly correlated predictors tend to be in or out of the model together.
Abstract: Variable selection is one of the most important problems in pattern recognition. In the linear regression model, many methods can solve this problem, such as the Least absolute shrinkage and selection operator (LASSO) and its many improved variants, but there are few variable selection methods for generalized linear models. We study the variable selection problem in the logistic regression model. We propose a new variable selection method, the logistic elastic net, and prove that it has a grouping effect, which means that strongly correlated predictors tend to be in or out of the model together. The logistic elastic net is particularly useful when the number of predictors (p) is much larger than the number of observations (n). By contrast, the LASSO is not a very satisfactory variable selection method when p is much larger than n. The advantage and effectiveness of this method are demonstrated on real leukemia data and in a simulation study.
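For the logistic elastic net entry above, the estimator itself is available off the shelf; a sketch with scikit-learn's elastic-net-penalised logistic regression is shown below. This reproduces only the penalty structure (the L1/L2 mix via l1_ratio), not the paper's solver or theory, and the hyperparameter values are placeholders.

```python
from sklearn.linear_model import LogisticRegression

# Elastic-net penalty requires the 'saga' solver; l1_ratio mixes L1 (sparsity)
# and L2 (grouping of correlated predictors).
model = LogisticRegression(penalty='elasticnet', solver='saga',
                           l1_ratio=0.5, C=1.0, max_iter=5000)
# model.fit(X, y); the nonzero entries of model.coef_ give the selected variables,
# and strongly correlated predictors tend to enter or leave the model together.
```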

Journal ArticleDOI
TL;DR: This work proposes a test data generation method for multi-path coverage based on a genetic algorithm with local evolution that can improve the utilization efficiency of test data.
Abstract: Generating test data by genetic algorithms is a promising research direction in software testing, in which path coverage is an important test method. The efficiency of test data generation for multi-path coverage needs to be further improved. We propose a test data generation method for multi-path coverage based on a genetic algorithm with local evolution. A mathematical model is established for all target paths, and in the algorithm the individuals are evolved locally according to different objective functions, which improves the utilization efficiency of test data. The computation cost is reduced by using fitness functions of different granularity in different phases of the algorithm.

Journal ArticleDOI
TL;DR: Using a real commercial deployment, the application test bed shows that IPv6 hosts can interact with IP-based sensor nodes through the 6LoWPAN gateway with acceptable latency and packet loss.
Abstract: In the paradigm of the Internet of things (IoT), sensor nodes collect data through a Wireless sensor network (WSN) and can be managed and accessed by human users through the Internet. Many challenges remain. Taking advantage of IPv6, IPv6 over Low-power personal area networks (6LoWPAN) implemented on resource-constrained devices makes this connection possible and easier. This paper contributes the architecture of an end-to-end communication system based on a 6LoWPAN gateway, which enables the convergence of the IPv6 network and low-power wireless networking and features the encapsulation of a 6LoWPAN adaptation layer in the Network adapter driver (NAD) of a personal computer. Using a real commercial deployment, the application test bed shows that IPv6 hosts can interact with IP-based sensor nodes through the 6LoWPAN gateway with acceptable latency and packet loss.

Journal ArticleDOI
TL;DR: A novel patch-based dark channel prior dehazing method is proposed to solve the cloud and fog cover problem in Remote sensing multi-spectral images, and the proposed method is superior to other related methods in terms of image quality evaluations.
Abstract: Remote sensing (RS) multi-spectral images usually suffer from cloud and fog cover, which can lead to analysis troubles and application limitations. A novel patch-based dark channel prior dehazing method is proposed to solve this problem. An Atmospheric light (AL) curved-surface hypothesis, instead of a globally invariant plane, is applied to describe the AL distribution, and a patch-based approach is given to estimate the curved surface. Using the AL curved-surface estimation, a new recovery model for RS multi-spectral images is given to obtain haze-free images. Comparative experiments are conducted, and the results illustrate that the proposed method produces visually impressive restored images and is superior to other related methods in terms of image quality evaluations.
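For the dehazing entry above, a crude sketch of the dark-channel pipeline with a spatially varying atmospheric light: A is estimated per block as a rough stand-in for the paper's fitted AL curved surface, rather than as a single global value. The patch size, block size, omega and t0 are conventional dark-channel defaults, and img is assumed to be a float RGB array in [0, 1].

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze(img, patch=15, block=64, omega=0.95, t0=0.1):
    dark = minimum_filter(img.min(axis=2), size=patch)      # patch-based dark channel
    h, w, _ = img.shape
    A = np.zeros((h, w))
    for i in range(0, h, block):
        for j in range(0, w, block):
            sl = np.s_[i:i + block, j:j + block]
            d = dark[sl]
            bright = d >= np.quantile(d, 0.999)              # haziest pixels in this block
            A[sl] = img[sl].max(axis=2)[bright].mean()       # block-wise atmospheric light
    t = np.clip(1 - omega * dark / A, t0, 1)                 # transmission map
    return np.clip((img - A[..., None]) / t[..., None] + A[..., None], 0, 1)
```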

Journal ArticleDOI
TL;DR: Simulation results demonstrate that LCMR can evenly spread the traffic over the network and increase network throughput under heavy load at the expense of some coding opportunities.
Abstract: The growth of network coding opportunities is considered the sole optimization goal by most current network-coding-based routing algorithms for wireless mesh networks. This usually results in a flow aggregation problem in areas with coding opportunities and degrades network performance. This paper proposes a Load balanced coding aware multipath routing (LCMR) for wireless mesh networks. To facilitate the evaluation of the discovered multiple paths and the tradeoff between coding opportunity and load balancing, a novel routing metric, the Load balanced coding aware routing metric (LCRM), is presented, which considers the load degree of nodes when detecting coding opportunities. LCMR can spread traffic over multiple paths to further balance load. Simulation results demonstrate that LCMR can evenly spread the traffic over the network and increase network throughput under heavy load at the expense of some coding opportunities.

Journal ArticleDOI
TL;DR: Simulation results and analysis show that the proposed dynamic watermarking scheme has better visual quality under a higher embedding capacity.
Abstract: A novel watermarking scheme for quantum images is proposed based on the Quantum cosine transform (QCT). The cosine coefficients are extracted by executing the QCT on the quantum image. A dynamic vector, instead of the fixed parameter used in other schemes, is utilized to control the embedding strength for the quantum watermark. The adder operation and the inverse QCT are implemented, which offset the QCT and guarantee that the embedding process has a smaller impact on the quantum carrier image. Simulation results and analysis show that the proposed dynamic watermarking scheme has better visual quality under a higher embedding capacity.

Journal ArticleDOI
TL;DR: Results show that the proposed protocol provides better energy efficiency and a longer lifetime in wireless sensor networks than the existing DMAC protocol.
Abstract: Wireless sensor networks (WSNs) have been employed as an ideal solution in many applications for data gathering in harsh environments. Energy consumption is a key issue in wireless sensor networks since nodes are often battery operated. The Medium access control (MAC) protocol plays an important role in energy efficiency in wireless sensor networks because nodes' access to the shared medium is coordinated by the MAC layer. An energy-efficient MAC protocol is designed for data gathering in linear wireless sensor networks. In order to enhance performance, when a source node transmits data to the sink, proper relay nodes are selected for forwarding data according to the energy consumption factor and the residual energy balance factor. Simulation experiments are conducted, and the results show that the proposed protocol provides better energy efficiency and a longer lifetime than the existing DMAC protocol.

Journal ArticleDOI
Lei Zhang1, Tiecheng Song1, Ming Wu1, Xu Bao2, Jie Guo1, Jing Hu1 
TL;DR: A traffic-adaptive spectrum handoff strategy is proposed for graded SUs so as to minimize the average cumulative handoff delay, and the effect of the service rate on the proposed spectrum switching point and the admissible access region is provided.
Abstract: In order to meet the different delay requirements of various communication services in Cognitive radio (CR) networks, Secondary users (SUs) are divided into two classes in this paper according to their priority of access to the spectrum. Based on the proactive spectrum handoff scheme, Preemptive resume priority (PRP) M/G/1 queueing is used to characterize multiple spectrum handoffs under two different spectrum handoff strategies. A traffic-adaptive spectrum handoff strategy is proposed for graded SUs so as to minimize the average cumulative handoff delay. Simulation results not only verify that our theoretical analysis is valid, but also show that the proposed strategy can reduce the average cumulative handoff delay evidently. The effect of the service rate on the proposed spectrum switching point and the admissible access region is also provided.

Journal ArticleDOI
TL;DR: A novel method for evaluating kernel functions is designed that can assess kernel function performance comprehensively and choose the best kernel function for different recognition application demands.
Abstract: The current electromagnetic environment changes rapidly and unpredictably, and existing methods for evaluating the Support vector machine (SVM) kernel functions used in radar signal recognition cannot cope with it. Kernel space separability, stability, and the number of parameters are therefore proposed in this paper as criteria for reviewing kernel function performance, and a novel method for evaluating kernel functions is designed. Simulations show that this method can assess kernel function performance comprehensively and choose the best kernel function for different recognition application demands.

Journal ArticleDOI
TL;DR: An access control scheme based on attribute encryption is designed, in which lightweight devices can safely use cloud computing resources to outsource encryption/decryption operations without worrying about exposing sensitive terminal data.
Abstract: Cloud computing services have developed rapidly in the field of lightweight terminals, especially in wireless communications. A comprehensive access control system framework is proposed for the cloud. An access control scheme based on attribute encryption is designed, in which lightweight devices can safely use cloud computing resources to outsource encryption/decryption operations without worrying about exposing sensitive terminal data. The scheme is verified by a performance evaluation of its security, computation, and storage costs, ensuring the legitimate interests of users in the cloud.

Journal ArticleDOI
TL;DR: A new PTS method is proposed to search for suboptimal rotation vectors in the OFDM system, and it can achieve better PAPR reduction and significantly reduce the computational complexity.
Abstract: Partial transmit sequence (PTS) is an effective technique to reduce the high Peak-to-average power ratio (PAPR) in Orthogonal frequency division multiplexing (OFDM) systems. However, the complexity of the Original PTS (O-PTS) increases exponentially with the number of sub-blocks. To reduce the computational complexity while still offering a lower PAPR, a new PTS method is proposed in this paper to search for suboptimal rotation vectors. In the proposed method, the candidate rotation vectors are generated based on a greedy and genetic algorithm. We also combine the proposed method with the superimposed training sequence method to obtain a further PAPR reduction. Theoretical analysis and simulation results show that the proposed method can achieve better PAPR reduction and significantly reduce the computational complexity.
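For the PTS entry above, a sketch of the basic mechanics the paper builds on: split the subcarriers into V sub-blocks, rotate each sub-block's time-domain signal by a phase factor, and transmit the combination with the lowest PAPR. The exhaustive search over {+1, -1} below is exactly what the paper's greedy/genetic search is meant to replace; the interleaved partition and all parameters are illustrative.

```python
import numpy as np
from itertools import product

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def pts(symbols, V=4, phases=(1, -1)):
    N = len(symbols)
    # Interleaved partition of the frequency-domain symbols into V sub-blocks.
    sub_time = []
    for v in range(V):
        block = np.zeros(N, dtype=complex)
        block[v::V] = symbols[v::V]
        sub_time.append(np.fft.ifft(block))
    # Try every phase-rotation vector and keep the combination with the lowest PAPR.
    best = min((np.sum([b * s for b, s in zip(bvec, sub_time)], axis=0)
                for bvec in product(phases, repeat=V)),
               key=papr_db)
    return best, papr_db(best)

# Example with random QPSK symbols on 256 subcarriers.
qpsk = np.exp(1j * np.pi / 2 * np.random.randint(4, size=256))
signal, papr = pts(qpsk)
```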

Journal ArticleDOI
TL;DR: The advantage of the proposed LGDDP method lies in its high recognition rate and low computational complexity, and the experimental results verify LGDDP's effectiveness in comparison with other well-known published face recognition methods.
Abstract: We propose a novel face image representation, the Local Gabor dominant direction pattern (LGDDP), for face recognition. The face image is convolved with Gabor filters, resulting in multiple response images of different orientations and scales. Each pixel of the response images is encoded by the LGDDP descriptor from its one or two dominant neighboring pixels. The image formed by the LGDDP descriptor is partitioned into multiple regions, and a histogram is extracted from each region. All the histograms are concatenated into a spatial histogram. The nearest neighbor classifier and the weighted histogram intersection similarity measure are used for face image classification. The advantage of the proposed LGDDP method lies in its high recognition rate and low computational complexity. Extensive experiments are performed on the FERET face image database, and the results verify LGDDP's effectiveness in comparison with other well-known published face recognition methods.

Journal ArticleDOI
TL;DR: The proposed approach reduces unnecessary membranes and communication rules by defining two membranes with many objects and rules inside each membrane, which makes the model suitable for implementation on a Graphics processing unit (GPU).
Abstract: Previous approaches using active membrane systems to solve the N-queens problem defined many membranes with just one rule inside them. This resulted in many communication rules utilised to communicate between membranes, which made communications between the cores and the threads a very time-consuming process. The proposed approach reduces unnecessary membranes and communication rules by defining two membranes with many objects and rules inside each membrane. With this structure, objects and rules can evolve concurrently in parallel, which makes the model suitable for implementation on a Graphics processing unit (GPU). The speedup using a GPU with global memory for N=10 is 10.6 times, but using tiling and shared memory, it is 33 times.

Journal ArticleDOI
TL;DR: An efficient approach for the deployment of sensor nodes in wireless networks, termed EDSNDA, is proposed, which takes both sensor coverage and network connectivity requirements into consideration while minimizing the number of necessary sensor nodes.
Abstract: Efficient sensor node deployment is extremely important in wireless sensor networks. Using as few sensor nodes as possible while satisfying requirements such as coverage, and overcoming potential sensor node failures and adverse environmental influence, has great practical significance. We propose an efficient approach for the deployment of sensor nodes in wireless networks, termed EDSNDA, which takes both sensor coverage and network connectivity requirements into consideration while minimizing the number of necessary sensor nodes. We propose a new sensor node coverage model. Based on this coverage model, we establish four dynamic programming models for four different practical situations, respectively. Algorithms are then proposed for solving the corresponding dynamic programming models. The validity of the method is justified by simulation studies in which it is compared with current representative methods. The simulation results show that, under the same circumstances, our method outperforms the others, using fewer sensor nodes while achieving better coverage and network connectivity.

Journal ArticleDOI
Yin Baiqiang1, Yigang He1, Bing Li1, Zuo Lei1, Lifen Yuan1 
TL;DR: In this paper, an adaptive singular value decomposition (SVD) method for solving the pass-region problem is proposed, which removes the smaller singular values and keeps the larger singular values.
Abstract: The S-transform (ST) is an excellent tool for time-frequency filtering. Two factors influence filtering performance: the Inverse S-transform (IST) algorithm and the pass-regions in the time-frequency domain. A novel matrix IST algorithm is derived, and an adaptive Singular value decomposition (SVD) method for solving the pass-region problem is proposed. The former avoids reconstruction errors in time-frequency filtering; the latter is effective in distinguishing the pass-region of the signal from noise. Filtering is realized by removing the smaller singular values and keeping the larger ones. An additive noise perturbation model is built in the ST time-frequency domain, and the effective rank of the noise perturbation model based on the matrix IST is analyzed. Simulation results indicate that the proposed SVD method provides higher precision than existing ones at low signal-to-noise ratios and does not need to compute the noise statistics. Illustrative examples verify the effectiveness of the proposed method.
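For the S-transform filtering entry above, only the singular-value step is sketched: small singular values of the time-frequency matrix are treated as noise and discarded. The energy-ratio rank selection is an assumption; the paper chooses the effective rank adaptively from a noise-perturbation model, and the matrix S would come from the S-transform (not shown).

```python
import numpy as np

def svd_denoise(S, energy=0.95):
    # Keep only the leading singular values that capture the given energy fraction.
    U, sigma, Vt = np.linalg.svd(S, full_matrices=False)
    keep = np.searchsorted(np.cumsum(sigma ** 2) / np.sum(sigma ** 2), energy) + 1
    return (U[:, :keep] * sigma[:keep]) @ Vt[:keep]   # low-rank (filtered) time-frequency matrix

# The filtered time-domain signal would then be obtained by applying the inverse
# S-transform to the returned matrix.
```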

Journal ArticleDOI
TL;DR: A leveled FHE scheme based on the Ring learning with errors (RLWE) problem is put forward by simultaneously applying both batch techniques available, which allows double packing many plaintext values into each ciphertext to support single-instruction-multiple-data-type operations, which reduces the ciphertext expansion ratio.
Abstract: To further improve the efficiency of Fully homomorphic encryption (FHE), a leveled FHE scheme based on the Ring learning with errors (RLWE) problem is put forward by simultaneously applying both available batch techniques. Our scheme therefore allows double packing many plaintext values into each ciphertext to support single-instruction-multiple-data-type operations, which effectively reduces the ciphertext expansion ratio. An efficient evolutionary method for achieving arbitrary homomorphic permutation operations on a packed ciphertext is also provided by using several given key-switching hints. Further, a few new operations are introduced, which not only describe the key switching process in our batch setting clearly, but also make it convenient to analyze the noise growth.

Journal ArticleDOI
TL;DR: An excellent model of collective rotation noise analysis is introduced, and the security of the SAGR04 protocol is discussed using information theory.
Abstract: Noise exists in the actual communication environment, so it is necessary and significant to analyze the security of the SAGR04 protocol in a noisy environment. An excellent model of collective rotation noise analysis is introduced, and the security of the SAGR04 protocol is discussed using information theory. Eavesdropping can be detected from the increase in the qubit error rate, and the eavesdropper can obtain at most about 50% of the keys. It can be concluded that the SAGR04 protocol, used for quantum key distribution, is secure.

Journal ArticleDOI
TL;DR: A visible and infrared video fusion method based on the Uniform discrete curvelet transform (UDCT) and spatial-temporal information is proposed, which outperforms comparison methods in terms of temporal stability and consistency as well as spatial-temporal information extraction.
Abstract: Multiple visual sensor fusion provides an effective way to improve the robustness and accuracy of video surveillance systems. Traditional video fusion methods fuse the source videos frame by frame using static image fusion methods, without considering the information in the temporal dimension, so the temporal information cannot be fully utilized in the fusion procedure. Aiming at this problem, a visible and infrared video fusion method based on the Uniform discrete curvelet transform (UDCT) and spatial-temporal information is proposed. The source videos are decomposed using UDCT, and a set of local spatial-temporal energy based fusion rules is designed for the decomposition coefficients. In these rules, we consider the current frame's coefficients as well as the coefficients in the temporal dimension, which are those of adjacent frames. Experimental results demonstrate that the proposed method works well and outperforms comparison methods in terms of temporal stability and consistency as well as spatial-temporal information extraction.