
Showing papers by "Xidian University" published in 2006


Journal ArticleDOI
TL;DR: The developed technique is applied to networks consisting of nodes with unknown but bounded nonlinear functions, and a typical example of a complex network with chaotic nodes is finally used to verify the theoretical results and the effectiveness of the proposed synchronization scheme.
Abstract: Global synchronization and asymptotic stability of complex dynamical networks are investigated in this paper. Based on a reference state, a sufficient condition for global synchronization and stability is derived. Unlike other approaches, where only local results were obtained, the complex network is not linearized in this paper. Instead, the sufficient condition for global synchronization and asymptotic stability is obtained by introducing a reference state and using the Lyapunov stability theorem rather than Lyapunov exponents; this condition is given simply in terms of the network coupling matrix and is therefore very convenient to use. Furthermore, the developed technique is applied to networks consisting of nodes with unknown but bounded nonlinear functions. A typical example of a complex network with chaotic nodes is finally used to verify the theoretical results and the effectiveness of the proposed synchronization scheme.

318 citations


Journal ArticleDOI
TL;DR: In this article, the Banach contraction principle and Caristi's fixed point theorem are generalized to the case of multi-valued mappings, and the results are extensions of the well-known fixed-point theorem of Nadler [S.B. Nadler Jr., Multi-valued contraction mappings].

250 citations


01 Jan 2006
TL;DR: The paradigm of network error correction as a generalization of classical link-by-link error correction was introduced and the network generalizations of the Hamming bound and the Singleton bound in classical algebraic coding theory were obtained.
Abstract: In Part I of this paper, we introduced the paradigm of network error correction as a generalization of classical link-by-link error correction. We also obtained the network generalizations of the Hamming bound and the Singleton bound in classical algebraic coding theory. In Part II, we prove the network generalization of the Gilbert-Varshamov bound and its enhancement. With the latter, we show that the tightness of the Singleton bound is preserved in the network setting. We also discuss the implications of the results in this paper. Definition 2. A network code is t-error-correcting if it can correct all τ-errors for τ ≤ t, i.e., if the total number of errors in the network is at most t, then the source message can be recovered by all the sink nodes u ∈ U. A network code is Y-error-correcting if it can correct E-errors for all E ∈ Y. In Part I, we proved the network generalizations of the Hamming bound and the Singleton bound. In this part, we prove a network generalization of the Gilbert-Varshamov bound and its enhancement. With the latter, we show that the tightness of the Singleton bound is preserved in the network setting. The rest of Part II is organized as follows. In Section 2, we prove the Gilbert bound and the Varshamov bound for network error-correcting codes. In Section 3, we sharpen the Varshamov bound obtained in Section 2 to the strengthened Varshamov bound. By means of the latter, we prove the tightness of the Singleton bound for network error-correcting codes.

232 citations


Book ChapterDOI
01 Jan 2006
TL;DR: The preliminary results show that this method can detect falls effectively and reduce the risk of injury for elderly people.
Abstract: Falls are a crucial problem in elderly people's daily life, and early detection of a fall is very important for rescuing the subject and avoiding a bad prognosis. In this paper, we use a wearable tri-axial accelerometer to capture the movement data of the human body and propose a novel fall detection method based on a one-class support vector machine (SVM). The one-class SVM model is trained on positive samples from the falls of younger volunteers and a dummy, and on outliers from the non-fall daily activities of younger and elderly volunteers. The preliminary results show that this method can detect falls effectively and reduce the risk of injury for elderly people.
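
As a rough illustration of the approach for readers who want to experiment, the sketch below trains a one-class SVM on simple magnitude statistics extracted from tri-axial accelerometer windows. It is not the paper's implementation: the feature set, window length, and nu parameter are assumptions, and the training windows here are synthetic placeholders standing in for recorded fall data.

```python
# A minimal sketch of one-class-SVM fall detection, assuming simple
# magnitude statistics as features (not the paper's feature set).
import numpy as np
from sklearn.svm import OneClassSVM

def window_features(acc):
    """acc: (n_samples, 3) tri-axial accelerations for one time window."""
    mag = np.linalg.norm(acc, axis=1)                 # acceleration magnitude
    return np.array([mag.mean(), mag.std(), mag.max(), mag.min()])

# Placeholder "fall" windows; in practice these come from recorded falls
# of younger volunteers and a dummy.
rng = np.random.default_rng(0)
fall_windows = [rng.normal(9.8, 3.0, size=(50, 3)) for _ in range(100)]
X_train = np.stack([window_features(w) for w in fall_windows])

model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_train)

# At run time, +1 means the window resembles the trained fall class;
# -1 flags it as an outlier (a non-fall daily activity).
new_window = rng.normal(9.8, 1.0, size=(50, 3))
print(model.predict(window_features(new_window).reshape(1, -1)))
```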

225 citations


Journal ArticleDOI
TL;DR: This paper develops a two-stage approach to synthesizing liveness-enforcing supervisors for flexible manufacturing systems (FMS) that can be modeled by a class of Petri nets; the approach is more efficient and yields structurally simpler supervisors than all known existing methods.
Abstract: This paper develops a two-stage approach to synthesizing liveness-enforcing supervisors for flexible manufacturing systems (FMS) that can be modeled by a class of Petri nets. First, we find the siphons that need to be controlled using a mixed integer programming (MIP) method. This avoids complete siphon enumeration, which is more time-consuming for a sizable plant model than the MIP method. Monitors are added only for those siphons that require them. Second, we rearrange the output arcs of the monitors on the condition that liveness is still preserved. Liveness is verified by an MIP-based deadlock detection method instead of much more time-consuming reachability analysis. Experimental studies show that the proposed approach is more efficient than the existing ones and can result in more permissive and structurally simpler liveness-enforcing supervisors than all known existing methods. This paper makes the application of siphon-based deadlock control methods to industrial-size FMS possible.

221 citations


Journal ArticleDOI
TL;DR: The biological background of DNA cryptography and the principle of DNA computing are introduced, the progress of DNA cryptographic research and several key problems are summarized, and the status, security, and application fields of DNA cryptography are compared with those of traditional cryptography and quantum cryptography.
Abstract: DNA cryptography is a newborn cryptographic field that emerged with the research of DNA computing, in which DNA is used as the information carrier and modern biological technology is used as the implementation tool. The vast parallelism and extraordinary information density inherent in DNA molecules are explored for cryptographic purposes such as encryption, authentication, signature, and so on. In this paper, we briefly introduce the biological background of DNA cryptography and the principle of DNA computing, summarize the progress of DNA cryptographic research and several key problems, discuss the trend of DNA cryptography, and compare the status, security, and application fields of DNA cryptography with those of traditional cryptography and quantum cryptography. It is pointed out that all three kinds of cryptography have their own advantages and disadvantages and will complement each other in future practical applications. The current main difficulties of DNA cryptography are the absence of an effective security theory and of simple, realizable methods. The main goal of DNA cryptography research is to explore the characteristics of DNA molecules and reactions, establish corresponding theories, discover possible development directions, search for simple methods of realizing DNA cryptography, and lay the basis for future development.

168 citations


Journal ArticleDOI
TL;DR: The latest research on synchronization schemes for both the continuous transmission mode and the burst packet transmission mode of wireless OFDM communications is overviewed, and three improved methods for fine symbol timing synchronization in the frequency domain are proposed.
Abstract: The latest research on synchronization schemes for both the continuous transmission mode and the burst packet transmission mode of wireless OFDM communications is overviewed in this paper. Typical algorithms dealing with symbol timing synchronization, carrier frequency synchronization, and sampling clock synchronization are briefly introduced and analyzed. Three improved methods for fine symbol timing synchronization in the frequency domain are also proposed, and several key issues in synchronization for OFDM systems are discussed.

160 citations


Journal ArticleDOI
01 Nov 2006
TL;DR: In this correspondence, elementary siphons of Petri nets are redefined and the significance of this improvement is shown.
Abstract: The concept of elementary siphons of Petri nets was first proposed in our previous work. However, their definitions can cause confusion when there exist weakly independent siphons in a net. In this correspondence, we redefine elementary siphons and show the significance of this improvement.

152 citations


Journal ArticleDOI
TL;DR: The main idea of the approach is to use a STAP-based method to properly overcome the aliasing effect caused by the lower pulse-repetition frequency (PRF) and retrieve the unambiguous azimuth wide (full) spectrum signals from the received echoes.
Abstract: A new concept of spaceborne synthetic aperture radar (SAR) implementation has recently been proposed: the constellation of small spaceborne SAR systems. In this implementation, several formation-flying small satellites cooperate to perform multiple space missions. We investigate the possibility of producing high-resolution wide-area SAR images and fine ground moving-target indication (GMTI) performance with a constellation of small spaceborne SAR systems. In particular, we focus on the problems introduced by this particular SAR system, such as Doppler ambiguities, high sparseness of the satellite array, and array element errors. A space-time adaptive processing (STAP) approach combined with conventional SAR imaging algorithms is proposed which can solve these problems to some extent. The main idea of the approach is to use a STAP-based method to properly overcome the aliasing effect caused by the low pulse-repetition frequency (PRF) and thereby retrieve the unambiguous azimuth wide (full) spectrum signals from the received echoes. Following this operation, conventional SAR data processing tools can be applied to fully focus the SAR images. The proposed approach can simultaneously achieve both high-resolution SAR mapping of wide ground scenes and GMTI with high efficiency. To obtain the array element errors, an array auto-calibration technique is proposed to estimate them based on the angular and Doppler ambiguity analysis of the clutter echo. The optimization of satellite formations is also analyzed, and a platform velocity/PRF criterion for array configurations is presented. An approach is given that allows almost any given sparse array configuration to satisfy the criterion by slightly adjusting the PRF. Simulated results are presented to verify the effectiveness of the proposed approaches.

151 citations


Journal ArticleDOI
TL;DR: A special attack strategy against the multiparty quantum secret sharing protocol is presented: using fake signals and Bell measurements, the agent Bob, who generates the initial signals, can elicit Alice's secret message.

144 citations


Journal ArticleDOI
TL;DR: A statistical model comprising two distribution forms, i.e., the Gamma distribution and the Gaussian mixture distribution, is developed to model the echoes of different types of range cells; it not only has better recognition performance but is also more robust to noise than the two existing statistical models.
Abstract: In statistical target recognition based on radar high-resolution range profiles (HRRP), two challenging tasks are how to deal with the target-aspect, time-shift, and amplitude-scale sensitivity of HRRP and how to accurately describe the statistical characteristics of HRRPs. In this paper, based on the scattering center model, range cells are classified, in accordance with the number of predominant scatterers in each cell, into three statistical types. After resolving the three sensitivity problems, this paper develops a statistical model comprising two distribution forms, i.e., the Gamma distribution and the Gaussian mixture distribution, to model the echoes of the different types of range cells with the corresponding distribution forms. The type of a range cell is determined by using the rival penalized competitive learning (RPCL) algorithm, while the parameters of the Gamma distribution and the Gaussian mixture distribution are estimated by the maximum likelihood (ML) method and the expectation-maximization (EM) algorithm, respectively. Experimental results for measured data show that the proposed statistical model not only has better recognition performance but is also more robust to noise than the two existing statistical models, i.e., the Gaussian model and the Gamma model.
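
For implementers, the fragment below sketches the core RPCL update used to type range cells: for each sample, the nearest center learns toward it while the runner-up (the rival) is pushed away with a much smaller de-learning rate. This is a minimal version that omits the frequency-sensitive weighting of full RPCL; the rates and initialization are illustrative assumptions.

```python
# Minimal rival penalized competitive learning (RPCL) sketch; alpha is the
# learning rate, beta << alpha is the de-learning rate for the rival center.
import numpy as np

def rpcl(X, n_centers=3, alpha=0.05, beta=0.002, epochs=20, seed=0):
    """X: (n_samples, n_features) float array of feature vectors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)].astype(float)
    for _ in range(epochs):
        for x in X:
            d = np.linalg.norm(centers - x, axis=1)
            winner, rival = np.argsort(d)[:2]
            centers[winner] += alpha * (x - centers[winner])   # learn
            centers[rival] -= beta * (x - centers[rival])      # penalize rival
    return centers
```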

Journal ArticleDOI
Abstract: In this paper, we propose a novel modified T-shaped planar monopole antenna in which two asymmetric horizontal strips are used as additional resonators to produce the lower and upper resonant modes. As a result, a dual-band antenna covering the 2.4- and 5-GHz wireless local area network (WLAN) bands is implemented. In order to expand the lower band, a multiband antenna covering the digital communications system, personal communications system, Universal Mobile Telecommunications System, and 2.4/5-GHz WLAN bands is also developed. Prototypes of the multiband antenna have been successfully implemented. Good omnidirectional radiation in the desired frequency bands has been achieved. The proposed multiband antenna, with its relatively low profile, is very suitable for multiband mobile communication systems.

Proceedings ArticleDOI
23 Oct 2006
TL;DR: The experimental results show that the two new strategies based on natural exponential functions converge faster than the linear one during the early stage of the search process and perform better on most continuous optimization problems.
Abstract: Inertia weight is one of the most important parameters of the particle swarm optimization (PSO) algorithm. Based on the basic idea of a decreasing inertia weight (DIW), two strategies based on natural exponential functions are proposed. Four different benchmark functions are used to evaluate the effects of these strategies on PSO performance. The results of the experiments show that these two new strategies converge faster than the linear one during the early stage of the search process. For most continuous optimization problems, these two strategies perform better than the linear one.
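
The exact exponential forms are not reproduced in the abstract, so the snippet below only illustrates the kind of contrast being drawn, assuming one plausible natural-exponential schedule against the standard linear one.

```python
# Linearly decreasing inertia weight vs. an assumed natural-exponential
# decrease; both run from w_max toward w_min over T iterations.
import numpy as np

w_max, w_min, T = 0.9, 0.4, 1000        # common PSO settings
t = np.arange(T)

w_linear = w_max - (w_max - w_min) * t / T
w_exp = w_min + (w_max - w_min) * np.exp(-4.0 * t / T)   # assumed form

# The exponential schedule drops the weight quickly at first, shifting the
# swarm from exploration to exploitation earlier than the linear schedule.
```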

Journal ArticleDOI
TL;DR: A robust automatic white balance algorithm is proposed in this paper, which extracts gray color points in images for color temperature estimation; it has the advantages of easy realization, low complexity, and robust convergence.
Abstract: A robust automatic white balance algorithm that extracts gray color points in images for color temperature estimation is proposed in this paper. A gray color point is a point where the R, G, and B components are equivalent under the canonical light source. The small color deviation of gray color points from gray under different color temperatures is used to estimate the color temperature of the light source. The test results show that the proposed algorithm provides a good perceptual effect and has the advantages of easy realization, low complexity, and robust convergence.
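
The core of such a gray-point method fits in a few lines. The sketch below labels pixels whose R, G, B values are nearly equal as gray candidates and derives per-channel gains from their mean deviation; the deviation threshold is an assumption, not the paper's value.

```python
# Illustrative gray-point automatic white balance.
import numpy as np

def gray_point_awb(img, thresh=0.08):
    """img: float RGB image in [0, 1], shape (H, W, 3)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = (r + g + b) / 3.0 + 1e-6
    # near-gray candidates: channels deviate little from their mean
    mask = (np.abs(r - y) + np.abs(g - y) + np.abs(b - y)) / y < thresh
    if not mask.any():
        return img                      # no gray points found; leave as-is
    gains = y[mask].mean() / np.array(
        [r[mask].mean(), g[mask].mean(), b[mask].mean()])
    return np.clip(img * gains, 0.0, 1.0)
```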

Journal ArticleDOI
TL;DR: A simple, efficient, high-quality instantaneous velocity estimation algorithm is developed in this paper using position measurements only, based on the fact that numerical integration provides more stable and accurate results than numerical differentiation in the presence of noise.
Abstract: High-quality low-speed motion control calls for precise position and velocity signals. However, velocity estimation based on simple numerical differentiation of only the position measurement may be very erroneous, especially in the low-speed region. A simple, efficient, high-quality instantaneous velocity estimation algorithm is developed in this paper using the position measurements only. The proposed estimator is constructed based on the fact that numerical integration provides more stable and accurate results than numerical differentiation in the presence of noise. The main attractions of the new algorithm are that it is very effective even in low-speed ranges, highly robust against noise, and easy to implement with simple computation. Both extensive simulations and experimental tests have been performed to verify the effectiveness and efficiency of the proposed approach.
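
The paper's premise is easy to verify numerically. The toy script below is not the proposed estimator, only a demonstration that a window-slope (integral-like) estimate from noisy position samples is far less noisy than a raw finite difference.

```python
# Finite-difference vs. window-averaged velocity from noisy positions.
import numpy as np

rng = np.random.default_rng(0)
dt, n = 1e-3, 2000
t = np.arange(n) * dt
pos = 0.05 * t + 1e-4 * rng.standard_normal(n)   # slow motion + sensor noise

v_diff = np.diff(pos) / dt                        # raw differentiation

w = 50                                            # averaging window length
v_int = (pos[w:] - pos[:-w]) / (w * dt)           # slope over a window

print(v_diff.std(), v_int.std())   # the windowed estimate is far smoother
```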

Journal ArticleDOI
TL;DR: A simple and effective implementation of the proposed self-adaptive contrast enhancement algorithm for infrared images, based on plateau histogram equalization and including its threshold value calculation, is described using a pipelined and parallel computation architecture.
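
For context, the core of plateau histogram equalization is to clip the histogram at a plateau value before computing the cumulative mapping, which keeps large uniform backgrounds from dominating the contrast stretch. The sketch below uses a fixed threshold for clarity; the paper computes the threshold self-adaptively and maps the algorithm onto a pipelined, parallel hardware architecture.

```python
# Plateau histogram equalization with a fixed (assumed) plateau value.
import numpy as np

def plateau_equalize(img, plateau, out_levels=256):
    """img: 2-D integer array of raw IR intensities (e.g., 14-bit)."""
    hist = np.bincount(img.ravel(), minlength=int(img.max()) + 1)
    hist = np.minimum(hist, plateau)          # clip the histogram
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]
    return (cdf[img] * (out_levels - 1)).astype(np.uint8)
```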

Journal ArticleDOI
Mi Zhao, Zhiwu Li
TL;DR: The approach makes unmarked siphons satisfy the cs-property when the elementary ones are properly supervised; the advantage of the novel method is that far fewer supervisory monitors and arcs are added and unnecessary iterative processes are avoided.

Journal ArticleDOI
TL;DR: In this paper, a novel CPW-fed broadband circularly polarised square slot antenna is proposed and fabricated, which is fed by a widened L-type strip along the diagonal line of the square slot.
Abstract: A novel CPW-fed broadband circularly polarised square slot antenna is proposed and fabricated. The proposed antenna is fed by a CPW with a widened L-type strip along the diagonal line of the square slot. Measured results show that the 3 dB axial-ratio bandwidth of the proposed antenna can reach 17%. The proposed antenna has a simple coplanar geometry and can be easily fabricated on an inexpensive substrate.

Journal ArticleDOI
TL;DR: Two new descriptors, color distribution entropy (CDE) and improved CDE (I-CDE), which introduce entropy to describe the spatial information of colors, are presented; results show that CDE and I-CDE give better performance than SCH and geostat.

Journal ArticleDOI
TL;DR: A new classification algorithm, the organizational coevolutionary algorithm for classification (OCEC), is proposed with the intrinsic properties of classification in mind; it uses a bottom-up search mechanism, and experiments show that OCEC obtains good scalability.
Abstract: Taking inspiration from the interacting process among organizations in human societies, a new classification algorithm, organizational coevolutionary algorithm for classification (OCEC), is proposed with the intrinsic properties of classification in mind. The main difference between OCEC and the available classification approaches based on evolutionary algorithms (EAs) is its use of a bottom-up search mechanism. OCEC causes the evolution of sets of examples, and at the end of the evolutionary process, extracts rules from these sets. These sets of examples form organizations. Because organizations are different from the individuals in traditional EAs, three evolutionary operators and a selection mechanism are devised for realizing the evolutionary operations performed on organizations. This method can avoid generating meaningless rules during the evolutionary process. An evolutionary method is also devised for determining the significance of each attribute, on the basis of which, the fitness function for organizations is defined. In experiments, the effectiveness of OCEC is first evaluated by multiplexer problems. Then, OCEC is compared with several well-known classification algorithms on 12 benchmarks from the UCI repository datasets and multiplexer problems. Moreover, OCEC is applied to a practical case, radar target recognition problems. All results show that OCEC achieves a higher predictive accuracy and a lower computational cost. Finally, the scalability of OCEC is studied on synthetic datasets. The number of training examples increases from 100 000 to 10 million, and the number of attributes increases from 9 to 400. The results show that OCEC obtains a good scalability.

Proceedings ArticleDOI
25 Jun 2006
TL;DR: Experiments show that the proposed algorithm preserves edge and texture information better in image fusion than the wavelet transform and Laplacian pyramid methods do.
Abstract: A novel image fusion algorithm based on the contourlet transform is introduced. Firstly, the problem that the wavelet transform cannot efficiently represent line and curve singularities in image processing is analyzed, and the principle of the contourlet transform and its good performance in expressing singularities in two or more dimensions are studied. Secondly, the feasibility of image fusion using the contourlet transform is discussed in detail, and a new fusion method based on the contourlet transform and its fusion framework are proposed. Finally, the structure of the transform coefficients and the fusion procedure are given in detail. The fusion rule for each kind of coefficient is chosen based on the decomposition-level parameter. In particular, for the high-frequency components, a fusion rule that chooses the coefficients with the greater region energy, combined with a region consistency check, is proposed. Experiments show that the proposed algorithm preserves edge and texture information better in image fusion than the wavelet transform and Laplacian pyramid methods do.
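
The high-frequency fusion rule described above can be stated compactly in code. The sketch below works on any pair of subband coefficient arrays and is independent of the contourlet implementation; the window size and majority threshold are assumptions.

```python
# Region-energy "choose-max" fusion with a consistency (majority) check.
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_highpass(c1, c2, win=3):
    """c1, c2: same-shape high-frequency subband coefficient arrays."""
    e1 = uniform_filter(c1 * c1, size=win)    # local region energy
    e2 = uniform_filter(c2 * c2, size=win)
    pick1 = e1 > e2
    # consistency check: a coefficient whose neighbors mostly come from
    # the other source is switched to that source
    votes = uniform_filter(pick1.astype(float), size=win)
    return np.where(votes > 0.5, c1, c2)
```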

Journal ArticleDOI
01 Feb 2006
TL;DR: Theoretical analyses show that MAEA-CSPs has a linear space complexity and converges to the global optimum; the minimum conflict encoding is also proposed to overcome the disadvantages of general encoding methods.
Abstract: With the intrinsic properties of constraint satisfaction problems (CSPs) in mind, we divide CSPs into two types, namely, permutation CSPs and nonpermutation CSPs. According to their characteristics, several behaviors are designed for agents by making use of the ability of agents to sense and act on the environment. These behaviors are controlled by means of evolution, so that the multiagent evolutionary algorithm for constraint satisfaction problems (MAEA-CSPs) results. To overcome the disadvantages of the general encoding methods, the minimum conflict encoding is also proposed. Theoretical analyses show that MAEA-CSPs has a linear space complexity and converges to the global optimum. The first part of the experiments uses 250 benchmark binary CSPs and 79 graph coloring problems from the DIMACS challenge to test the performance of MAEA-CSPs for nonpermutation CSPs. MAEA-CSPs is compared with six well-defined algorithms, and the effect of the parameters is analyzed systematically. The second part of the experiments uses a classical CSP, the n-queen problem, and a more practical case, job-shop scheduling problems (JSPs), to test the performance of MAEA-CSPs for permutation CSPs. The scalability of MAEA-CSPs in n for n-queen problems is studied with great care. The results show that MAEA-CSPs achieves good performance when n increases from 10^4 to 10^7 and has a linear time complexity. Even for the 10^7-queen problem, MAEA-CSPs finds a solution within only 150 seconds. For JSPs, 59 benchmark problems are used, and good performance is also obtained.

Journal ArticleDOI
TL;DR: A new method that takes advantage of the coherence information of neighboring pixel pairs to automatically coregister the SAR images and employs the projection of the joint signal subspace onto the corresponding joint noise subspace to estimate the terrain interferometric phase.
Abstract: In this paper, we propose a new method to estimate the synthetic aperture radar interferometry (InSAR) interferometric phase in the presence of large coregistration errors. The method takes advantage of the coherence information of neighboring pixel pairs to automatically coregister the SAR images and employs the projection of the joint signal subspace onto the corresponding joint noise subspace to estimate the terrain interferometric phase. The method can automatically coregister the SAR images and reduce the interferometric phase noise simultaneously. Theoretical analysis and computer simulation results show that the method can provide an accurate estimate of the terrain interferometric phase (interferogram) even when the coregistration error reaches one pixel. The effectiveness of the method is also verified with real data from the Spaceborne Imaging Radar-C/X-Band SAR and the European Remote Sensing 1 and 2 satellites.

Journal ArticleDOI
TL;DR: In this paper, a simple locally resonant cavity cell (LRCC) model for EBG structures is presented to gain insight into the physical mechanism of EBG structures and the interaction of electromagnetic waves with them.
Abstract: Mushroom-like electromagnetic band gap (EBG) structures exhibit unique electromagnetic properties that have found a wide range of electromagnetic device applications. This paper focuses on the local resonance behaviors of EBG structures, and a simple locally resonant cavity cell (LRCC) model for mushroom-like EBG structures is presented to gain insight into the physical mechanism of EBG structures and the interaction of electromagnetic waves with them. The LRCC model proposes two kinds of main resonance modes: one is the mono-polarized mode, which accurately predicts the position of the surface-wave suppression bandgap and is in principle equivalent to the LC parallel resonance based on the quasistatic assumption; the other is the cross-coupling polarized mode, which exhibits some interesting resonant phenomena that have not been observed previously. Parametric studies, including the effect of the radius of the metal-plated vias, are effectively performed by using the LRCC model. Some numerical simulations and experiments on EBG structures, such as square, triangle, and hexagon lattices, are given to illustrate the applications and validity of the LRCC model proposed in this paper.

Journal ArticleDOI
TL;DR: An optimization method, which combines the chaotic behavior of individual ants with the intelligent optimization action of an ant colony, is proposed, which is a deterministic process different from the conventional ant algorithm.
Abstract: Inspired by the behavior of ants in nature, we propose an optimization method that combines the chaotic behavior of individual ants with the intelligent optimization action of an ant colony. Our method includes the effects of both chaotic dynamics and swarm-based search; it is a deterministic process, different from the conventional ant algorithm. The nonlinear dynamics of the proposed method are analyzed, and we show how the algorithm, called chaotic ant swarm optimization, can be applied to numerical optimization problems with encouraging results.

Journal ArticleDOI
03 Apr 2006
TL;DR: A secure buyer-seller watermarking protocol without the assistance of a TTP is proposed, in which there are only two participants, a seller and a buyer; the protocol can trace piracy and protect the customer's rights.
Abstract: In existing watermarking protocols, a trusted third party (TTP) is introduced to guarantee that a protocol is fair to both the seller and the buyer in a digital content transaction. However, the TTP decreases the security and affects the protocol implementation. To address this issue, in this article a secure buyer-seller watermarking protocol without the assistance of a TTP is proposed, in which there are only two participants, a seller and a buyer. Based on the idea of sharing a secret, the watermark embedded in digital content to trace piracy is composed of two pieces of secret information, one produced by the seller and one by the buyer. Since neither party knows the exact watermark, the buyer cannot remove the watermark from the watermarked digital content, and at the same time the seller cannot fabricate piracy to frame an innocent buyer. In other words, the proposed protocol can trace piracy and protect the customer's rights. In addition, because no third party is introduced into the proposed protocol, the problem of a seller (or a buyer) colluding with a third party to cheat the buyer (or the seller), namely, the conspiracy problem, is avoided.

Journal ArticleDOI
TL;DR: An improved adaptive predistortion method, called a 2D LUT (two-dimensional look-up table) with different accuracy levels, is presented to linearize HPAs with memory effects; it achieves excellent performance in mitigating the signal deterioration caused by memory effects.
Abstract: It is well known that HPAs (high power amplifiers) are inherently nonlinear devices, and much research has focused on predistortion for memoryless HPAs. However, the memory effects of HPAs can no longer be ignored when communication systems have wider bandwidths. The memoryless predistortion techniques proposed previously are seldom satisfactory for typical wideband applications such as OFDM (orthogonal frequency division multiplexing) systems. In this paper, an improved adaptive predistortion method called a 2D LUT (two-dimensional look-up table) with different accuracy levels is presented to linearize HPAs with memory effects. Simulation and experimental results show that the 2D LUT achieves excellent performance in mitigating the signal deterioration caused by memory effects, both rectifying the signal constellation distortion and suppressing the spectrum emission. Large-scale matrix computation is also avoided in these adaptive algorithms, which makes them feasible when a real-time system is necessary.
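
To make the 2D LUT idea concrete, the sketch below indexes a complex-gain table by the current and previous input magnitudes so that one-tap memory effects can be corrected. The table size and the LMS-style update are illustrative assumptions, not the paper's design.

```python
# Sketch of a 2-D LUT predistorter with a one-sample memory index.
import numpy as np

N = 64                                    # table resolution per dimension
lut = np.ones((N, N), dtype=complex)      # complex gain-correction table

def _index(mag, max_mag=1.0):
    return min(int(mag / max_mag * (N - 1)), N - 1)

def predistort(x_curr, x_prev):
    """Apply the table entry addressed by current and previous magnitude."""
    return lut[_index(abs(x_curr)), _index(abs(x_prev))] * x_curr

def adapt(x_curr, x_prev, y_pa, mu=0.1):
    """LMS-style update nudging the PA output y_pa toward the desired x_curr."""
    i, j = _index(abs(x_curr)), _index(abs(x_prev))
    err = x_curr - y_pa
    lut[i, j] += mu * err / (x_curr + 1e-12)   # normalized correction
```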

Journal ArticleDOI
TL;DR: The possibility of taking only one single mode or several modes for each layer is shown to be useful in the study of the scattering characteristics of a multilayered sphere and in the measurement of the sizes and refractive indices of particles.
Abstract: We have derived the formulas for the Debye-series decomposition of light scattering by a multilayered sphere. This formulation permits the mechanism of light scattering to be studied. An efficient algorithm is introduced that permits stable calculation for a large sphere with many layers. The formation of triple first-order rainbows by a three-layered sphere, and single-order rainbows and the interference of different-order rainbows by a sphere with a gradient refractive index, are then studied by use of the Debye model and Mie calculation. The possibility of taking only one single mode or several modes for each layer is shown to be useful in the study of the scattering characteristics of a multilayered sphere and in the measurement of the sizes and refractive indices of particles.

Journal ArticleDOI
TL;DR: A novel algorithm, named FS-KFD, is developed to tune the scaling factors and regularization parameters for the feature-scaling kernel where each feature individually associates with a scaling factor.
Abstract: Kernel Fisher discriminant analysis (KFD) is a successful approach to classification. It is well known that the key challenge in KFD lies in the selection of free parameters such as kernel parameters and regularization parameters. Here we focus on the feature-scaling kernel, where each feature is individually associated with a scaling factor. A novel algorithm, named FS-KFD, is developed to tune the scaling factors and regularization parameters for the feature-scaling kernel. The proposed algorithm is based on optimizing the smoothed leave-one-out error via a gradient-descent method and has been demonstrated to be computationally feasible. FS-KFD is motivated by two fundamental facts: the leave-one-out error of KFD can be expressed in closed form, and the step function can be approximated by a sigmoid function. Empirical comparisons on artificial and benchmark data sets suggest that FS-KFD improves KFD in terms of classification accuracy.

Jiao Licheng
01 Jan 2006
TL;DR: This work analyzes the computation of the GLCM by using the Markov chain property and proves that the GLCM is independent of distance and angle when the distance is large enough.
Abstract: The Gray Level Co-occurrence Matrix (GLCM) has been proved to be a promising method for image texture analysis. However, the parameters used in computing the GLCM of an image can be selected from a wide range, which requires a large amount of computation and makes it difficult to analyze image textures. To simplify the computation of the GLCM, we analyze it by using the Markov chain property. We prove that the GLCM is independent of distance and angle when the distance is large enough. According to our analysis, the computation of the GLCM is simplified by reducing the number of selected values of distance and angle. Finally, we give simulation results on natural texture images and SAR images to validate the theoretical analysis.
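
In practical terms, the result means the GLCM needs to be evaluated at only a few (distance, angle) pairs, since entries at sufficiently large distances become insensitive to both parameters. A minimal sketch with scikit-image (assuming version 0.19+ for the graycomatrix name):

```python
# Computing GLCMs over a small set of (distance, angle) pairs.
import numpy as np
from skimage.feature import graycomatrix

img = (np.random.rand(64, 64) * 8).astype(np.uint8)   # toy 8-level image

glcm = graycomatrix(img,
                    distances=[1, 4, 16],
                    angles=[0, np.pi / 4, np.pi / 2],
                    levels=8, symmetric=True, normed=True)
print(glcm.shape)   # (levels, levels, n_distances, n_angles) = (8, 8, 3, 3)
```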