Showing papers in "Journal of Image and Graphics in 2004"


Journal Article
TL;DR: A watermarking algorithm based on singular value decomposition (SVD) is proposed that is robust to common geometric distortions; it is proved rigorously that the singular values of the watermarked image are invariant under transpose, mirror reflection, rotation, scaling, and translation.
Abstract: Many digital watermarks now available for images are sensitive to geometric distortions, which in particular prevent blind detection of a public watermark. In this paper, a watermarking algorithm based on singular value decomposition (SVD) is proposed that is robust to common geometric distortions. The watermark is embedded into the image's singular values. Using the algebraic properties of the SVD, it is proved rigorously that the singular values of the watermarked image are invariant under the geometric distortions of transpose, mirror reflection, rotation, scaling, and translation. Experimental results show that the watermarking algorithm performs well in robustness: the embedded watermark can be detected reliably after these geometric distortions, common signal processing, or JPEG compression.

76 citations
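
A minimal NumPy sketch of the embedding and detection steps described above. The additive rule S' = S + alpha*W and the value of alpha are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def embed_svd_watermark(image, watermark, alpha=0.05):
    """Embed a watermark into the singular values of a grayscale image.

    Singular values are unchanged by transpose, mirroring, and rotation by
    multiples of 90 degrees, which is the invariance the paper exploits.
    """
    U, S, Vt = np.linalg.svd(image.astype(np.float64), full_matrices=False)
    S_marked = S + alpha * watermark            # additive embedding (assumed form)
    return U @ np.diag(S_marked) @ Vt, S        # keep the original S for detection

def detect_svd_watermark(attacked_image, original_S, alpha=0.05):
    """Estimate the watermark from the attacked image's singular values."""
    S_attacked = np.linalg.svd(attacked_image.astype(np.float64), compute_uv=False)
    return (S_attacked - original_S) / alpha

# Usage sketch: a transpose attack leaves the singular values untouched.
img = np.random.rand(256, 256) * 255
wm = np.random.randn(256) * 0.5
marked, S0 = embed_svd_watermark(img, wm)
wm_est = detect_svd_watermark(marked.T, S0)     # recovers wm up to numerical error
```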


Journal Article
HE Sai-xian
TL;DR: An adaptive Canny edge-detection algorithm is proposed that keeps Canny's strengths of good localization, a single response to each edge, and good detection, while improving the detection of fine edge detail.
Abstract: This paper is based on the Canny algorithm. An adaptive Canny edge-detection method is proposed. The algorithm keeps Canny's excellent properties of good localization, a single response to each edge, and good detection, while improving the detection of fine edge detail. The adaptive Canny algorithm divides the image into sub-images and detects edges in each with an adaptive threshold derived from the whole image's edge information, which improves the automation of edge detection. Mathematical analysis and test results demonstrate that the adaptive method is an effective improvement on edge detection.

65 citations
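
A short sketch of the block-wise idea, assuming OpenCV: each sub-image gets its own Canny thresholds. The median-based threshold rule is a common heuristic used here for illustration; the paper derives its thresholds from whole-image edge information.

```python
import cv2
import numpy as np

def adaptive_canny(gray, block=64):
    """Block-wise Canny: each sub-image is detected with thresholds
    computed from its own statistics (median-based rule assumed)."""
    edges = np.zeros_like(gray)
    h, w = gray.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            sub = np.ascontiguousarray(gray[y:y + block, x:x + block])
            m = np.median(sub)
            lo = int(max(0, 0.66 * m))      # lower hysteresis threshold
            hi = int(min(255, 1.33 * m))    # upper hysteresis threshold
            edges[y:y + block, x:x + block] = cv2.Canny(sub, lo, hi)
    return edges
```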


Journal Article
TL;DR: Curvilinear regression analysis proves that the result of the new approach is highly correlated with the MTF measured by an optical instrument; in other words, the new method is sensitive to changes in image definition.
Abstract: Evaluation of definition (sharpness) for gray-scale digital images is an important aspect of digital imaging systems. In order to evaluate the definition of a gray-scale image accurately and effectively, we present a new approach based on the EVA method which, while retaining important features of existing methods, overcomes some of their limitations. Curvilinear regression analysis proves that the result of the new approach is highly correlated with the MTF measured by an optical instrument; in other words, the new method is sensitive to changes in image definition. Comparative analysis also shows that the new method assesses gray-scale image definition better than traditional measures such as entropy. Experiments on hundreds of gray-scale images of many kinds show that the new approach tracks changes in image definition accurately. From these results we conclude that the approach can be applied to many kinds of gray-scale digital images accurately and effectively.

64 citations


Journal Article
TL;DR: A new method for enhancing images taken in fog needs no atmospheric model and operates on the image alone, directly enhancing details of the scene.
Abstract: According to the effect of fog on images, a new method for enhancing images taken in fog is proposed in this paper. It does not need an atmospheric model and operates on the image alone, directly enhancing details of the scene. A moving mask is adopted to segment the scene at different depths; each pixel in the mask is then processed using block-overlapped histogram equalization. The influence of the mask's size is also discussed. Although the enhancement improves visual quality, it also amplifies noise in the sky region. To restrain this adverse effect, the sky region is segmented before enhancement: an optimal normal distribution is fitted to the image's gray-level distribution, the estimated distribution gives the range of pixel values in the sky region, and with this information the sky region is segmented from the image. Experiments show that this algorithm can efficiently reduce image degradation and enhance image clearness.

29 citations
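
A rough sketch of the pipeline, assuming OpenCV. CLAHE is used here as a readily available relative of the paper's block-overlapped histogram equalization, and the normal-distribution sky test is a simplified stand-in for the paper's optimal-normal-distribution fit.

```python
import cv2
import numpy as np

def defog_enhance(gray, sky_sigma_mult=1.0, clip=2.0, tile=8):
    """Enhance a foggy grayscale image: estimate a bright sky region from the
    gray-level distribution, enhance everything else, leave the sky alone."""
    mu, sigma = gray.mean(), gray.std()
    sky = gray > mu + sky_sigma_mult * sigma            # bright pixels ~ sky (assumption)
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(tile, tile))
    enhanced = clahe.apply(gray)                        # local contrast enhancement
    out = enhanced.copy()
    out[sky] = gray[sky]                                # avoid noise blow-up in the sky
    return out
```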


Journal Article
TL;DR: The primary content and state of the art in augmented reality, which combines virtual computer-generated material with the surrounding physical world, registers real and virtual objects with each other, and runs interactively in real time, are introduced.
Abstract: This paper presents a survey of the field of augmented reality (AR), which combines virtual computer-generated material with the surrounding physical world, registers real and virtual objects with each other, and runs interactively in real time. AR has become an important research field in VR and the next generation of human-computer interfaces. This paper introduces the primary content and state of the art in the field. The key technologies, including basic tracking methods, display devices, and registration processes, are discussed, and many typical AR applications and development tools are listed. The paper describes the characteristics of AR systems, with a detailed analysis of some technically difficult problems in AR systems; the corresponding solutions are also mentioned. Current research in AR is largely focused on AR system frameworks, which usually provide a unified interface supporting heterogeneous AR display devices in different AR applications and make the design of AR applications more convenient. Two of the most influential AR frameworks, Studierstube and DWARF, are introduced. This survey provides a starting point for anyone interested in AR research.

28 citations


Journal Article
TL;DR: Experiments show that SCFNN possesses good ability for automatic target detection and, compared with conventional neural network methods, eliminates uncertainty and retains target shape effectively.
Abstract: This paper proposes a structure-context based fuzzy neural network (SCFNN) approach for automatic target detection. Fuzzy neural network methods not only possess advantages such as adaptivity, parallelism, robustness, ruggedness, and optimality, but also integrate the ability to describe and resolve system uncertainty through the knowledge and rules of fuzzy set theory. Accordingly, they are powerful tools for image processing and pattern recognition. Using fuzziness measures as the objective function of the neural network can validly describe the uncertainty of pixels' categories, so that image classification is optimized by minimizing the objective function. Putting a structure-context information constraint on the neurons' weighting process reduces the loss of image information, especially the rich information carried by target edges; target attributes such as profile and shape are thereby retained, and the false detection rate is also reduced markedly. Experiments on remotely sensed target images validate the SCFNN approach. The results show that SCFNN performs automatic target detection well and, compared with conventional neural network methods, eliminates uncertainty and retains target shape effectively.

24 citations


Journal Article
TL;DR: A novel image inpainting algorithm based on RBF (radial basis functions), which automatically detects contours of the mask and finds appropriate regions to construct the RBF to nicely fix the damaged image or remove specific objects.
Abstract: The goal of digital image inpainting is to restore damaged regions or remove objects in the image This paper presents a novel image inpainting algorithm based on RBF (radial basis functions) After the user selects the regions to be inpainted, the algorithm automatically detects contours of the mask and finds appropriate regions to construct the RBF Color of the 2D image is treated as height field over a regularly sampled grid, the 2D image inpainting problem is naturally converted to 3D implicit surface reconstruction problem, which RBF has been proved to be a good solver With RBF resampling,the algorithm can nicely fix the damaged image or remove specific objects Experiments show that our algorithm can fix a large variety of images effectively and robustly

18 citations
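
A minimal sketch of the height-field idea using SciPy's RBF interpolator: known pixels in a ring around the damaged region anchor the RBF, which is then resampled inside the mask. The ring width and the thin-plate-spline kernel are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import binary_dilation

def rbf_inpaint(channel, mask, ring=3):
    """Inpaint one image channel treated as a height field z = f(x, y).
    `mask` is a boolean array marking the damaged pixels."""
    ring_mask = binary_dilation(mask, iterations=ring) & ~mask
    ys, xs = np.nonzero(ring_mask)                       # known anchor pixels
    pts = np.column_stack([xs, ys]).astype(float)
    vals = channel[ys, xs].astype(float)
    rbf = RBFInterpolator(pts, vals, kernel='thin_plate_spline')
    my, mx = np.nonzero(mask)                            # pixels to reconstruct
    out = channel.astype(float).copy()
    out[my, mx] = rbf(np.column_stack([mx, my]).astype(float))
    return np.clip(out, 0, 255)
```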


Journal Article
TL;DR: The comparison of precision and recall between texture-based and color-texture-based image retrieval shows that fused features describe image content better than a single feature does.
Abstract: Retrieval and management of vast quantities of image data needs efficient approaches to content-based image retrieval. Current image retrieval methods in geographic image databases use only one kind of image feature, which cannot describe image content completely. In this paper, a remote sensing image retrieval approach using fused color and texture features is presented. A given image is decomposed using a quintree: each subimage except the central one is split into 5 sublevel subimages until the subimage size reaches 16×16 pixels. Energy values of the image are calculated via a multi-channel Gabor filter, and the mean values and standard deviations of each subimage are extracted as texture features; color features are calculated as well. Similarity between a given query image and database subimages of approximately equal size is measured using a linearly weighted distance over the color and texture features, and the most similar subimages are returned as query results. The approach is applied to retrieving high-resolution remote sensing images from a database, and its efficiency is confirmed by experiments. The comparison of precision and recall between texture-based and color-texture-based retrieval shows that fused features describe image content better than a single feature does.

16 citations
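
A sketch of the texture-feature step, assuming scikit-image's Gabor filter as the multi-channel bank; the frequency/orientation configuration is an assumption for illustration.

```python
import numpy as np
from skimage.filters import gabor

def gabor_texture_features(subimage,
                           frequencies=(0.1, 0.2, 0.4),
                           orientations=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean and standard deviation of Gabor energy for one sub-image,
    concatenated over all channels into a texture feature vector."""
    feats = []
    for f in frequencies:
        for theta in orientations:
            real, imag = gabor(subimage, frequency=f, theta=theta)
            energy = np.sqrt(real ** 2 + imag ** 2)      # per-pixel channel energy
            feats.extend([energy.mean(), energy.std()])
    return np.asarray(feats)
```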


Journal Article
TL;DR: Improvement of existing BIR methods, development of new techniques, research on degradation models with non-linear characteristics, noise removal, real-time algorithms, and their applications are the challenges and future research directions.
Abstract: When the point spread function (PSF) is unknown or only partially determined, restoration of degraded images is called blind image restoration (BIR). In recent years, BIR algorithms (BIRA) have been studied widely. In this paper, BIRA are classified into three kinds according to the characteristics of the PSF: single-channel BIRA with a space-invariant PSF, multi-channel BIRA with a space-invariant PSF, and BIRA with a space-variant PSF. The current research status of BIRA is discussed in detail, and the advantages and disadvantages of the algorithms are pointed out. After surveying these techniques, the following conclusions are drawn: improvement of existing BIR methods, development of new techniques, research based on degradation models with non-linear characteristics, noise removal, real-time algorithms, and their applications are the main challenges and future research directions.

16 citations


Journal Article
TL;DR: A novel image quality assessment consistent with the perceptual properties of the human eye is proposed: the wavelet transform is used because it matches the multi-channel model of the HVS, the bandpass property of the CSF (contrast sensitivity function) is integrated, and computational complexity is considered.
Abstract: Research on image quality assessment is meaningful for image processing projects. Since a human being is the final receiver of the image, the key point of image assessment is that it should match the characteristics of the HVS (human visual system). In this paper, a novel image quality assessment consistent with the perceptual properties of the human eye is proposed. In this algorithm, the wavelet transform is used because it matches well the multi-channel model of the HVS, the bandpass property of the CSF (contrast sensitivity function) is integrated, and the computational complexity is considered. The simulation results show that the correlation coefficient between the algorithm and the subjective MOS (mean opinion score) is 0.95, whereas the correlation coefficient obtained by the PSNR (peak signal-to-noise ratio) measure is 0.81.

15 citations
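
A minimal sketch of the wavelet-plus-CSF idea using PyWavelets: subband errors are weighted by a band-pass, CSF-like profile. The wavelet, level count, and weights are illustrative assumptions, not the paper's values.

```python
import numpy as np
import pywt

def wavelet_csf_quality(reference, distorted, wavelet='db4', levels=4,
                        csf_weights=(0.3, 0.6, 1.0, 0.7)):
    """Perceptual distortion score: CSF-weighted MSE over wavelet detail
    subbands (coarsest level first). Lower is better."""
    ref = pywt.wavedec2(reference.astype(float), wavelet, level=levels)
    dis = pywt.wavedec2(distorted.astype(float), wavelet, level=levels)
    score = 0.0
    for lvl, (r_bands, d_bands) in enumerate(zip(ref[1:], dis[1:])):
        w = csf_weights[lvl]                     # band-pass weighting (assumed)
        for r, d in zip(r_bands, d_bands):       # horizontal, vertical, diagonal
            score += w * np.mean((r - d) ** 2)
    return score
```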


Journal Article
TL;DR: The principles and methods of two fusion algorithms, SFIM (smoothing filter-based intensity modulation) and the Gram-Schmidt transform, are described; the results showed no distinct difference in spatial detail improvement, but in terms of spectral information fidelity the IHS and PC methods were the worst, the Gram-Schmidt method was better, and the SFIM method was the best.
Abstract: There is great application potential for the fusion of remote sensing images. With the development of quantitative remote sensing, fusion is required not only to improve spatial detail but also to preserve the spectral information of the multispectral bands. The principles and methods of two fusion algorithms, SFIM (smoothing filter-based intensity modulation) and the Gram-Schmidt transform, are described. In a case study of an IKONOS urban image, visual judgment, quantitative statistical parameters, and graph comparisons were used to assess these two algorithms against the traditional methods of the IHS transform and the PC (principal component) transform. The results showed no distinct difference in the spatial detail improvement. However, in terms of spectral information fidelity, the IHS and PC methods were the worst, the Gram-Schmidt method was better, and the SFIM method was the best.
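
The SFIM rule itself is compact: the fused band is the multispectral band modulated by the ratio of the pan image to its smoothed copy. A one-band sketch follows; the kernel size should match the resolution ratio of the two sensors (7 is illustrative).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sfim_fuse(ms_band, pan, kernel=7):
    """SFIM fusion of one co-registered multispectral band with a pan image:
    fused = MS * PAN / smoothed(PAN)."""
    pan = pan.astype(float)
    smooth = uniform_filter(pan, size=kernel)            # low-pass pan estimate
    return ms_band.astype(float) * pan / np.maximum(smooth, 1e-6)
```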

Journal Article
TL;DR: In real applications, the new method not only performs better than the MTM filter but is also unaffected by any threshold.
Abstract: Gaussian noise and isolated (impulse) noise often occur simultaneously in natural images, and it is difficult to remove both at the same time with only a median filter or a mean filter. To address this, Lee and Kassam proposed an improved mean filter, the MTM (modified trimmed mean) filter. Although it is a substantial improvement, its effect is still not ideal, and that effect depends heavily on a threshold. In this paper, an improved median-based method is proposed on the basis of an analysis of MTM and traditional filtering methods. The new method applies adaptive operators on the N×N neighborhood of every pixel in the processed image; the operators differ from one area of the image to another. The operator weights depend mainly on the median of the N×N neighborhood: the closer a gray value is to the median, the stronger its weight. In real applications, the new method not only performs better than the MTM filter, but is also unaffected by any threshold.
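
A sketch of the weighting rule, assuming a Gaussian profile as one concrete form of "closer to the median means stronger weight"; the paper does not specify this exact profile.

```python
import numpy as np

def median_weighted_filter(image, n=3, sigma=20.0):
    """For each NxN window, weight neighbours by closeness of their gray
    value to the window median, then take the weighted average."""
    pad = n // 2
    padded = np.pad(image.astype(float), pad, mode='reflect')
    out = np.empty(image.shape, dtype=float)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            win = padded[y:y + n, x:x + n]
            med = np.median(win)
            wts = np.exp(-((win - med) ** 2) / (2 * sigma ** 2))  # assumed profile
            out[y, x] = (wts * win).sum() / wts.sum()
    return out
```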

Journal Article
TL;DR: A new fast fuzzy C-means (FCM) clustering method for color image segmentation, requiring no a priori information about the number of clusters, is proposed to solve FCM's heavy computational burden and its sensitivity to initial centers.
Abstract: A new fast fuzzy C-means (FCM) clustering method without a priori information about the number of clusters is proposed for color image segmentation, to solve FCM's heavy computational burden and its sensitivity to initial cluster centers; the method is simple and easy to implement. It uses hierarchical subtractive clustering (HSC), which reduces the heavy computational load of clustering a large number of data points, to partition the image data into a certain number of subsets with similar color. The centers of the subsets are used to initialize the cluster centers, and the subset centers together with the number of points in their neighborhoods are used in FCM. The computation speed of the fuzzy clustering algorithm improves greatly, because the number of color image data points used in fuzzy clustering is reduced notably and the computational load of HSC is much less than that of plain subtractive clustering. Furthermore, a cluster validity index can be used to find the number of clusters quickly. Experiments show that, without changing the clustering function, the proposed approach is much faster than the plain FCM algorithm and can segment color images quickly and effectively.
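
A sketch of the speed-up, assuming the HSC step has already produced subset centres and their populations: FCM (with fuzzifier m = 2) runs on the few weighted centres instead of on every pixel. The function and parameter names are illustrative.

```python
import numpy as np

def weighted_fcm(init_centers, points, counts, iters=100, eps=1e-5):
    """FCM on subset centres, each weighted by the number of pixels it
    represents. `points` (n x d) and `counts` (n,) come from HSC."""
    V = np.asarray(init_centers, dtype=float)            # c x d cluster centres
    X = np.asarray(points, dtype=float)
    w = np.asarray(counts, dtype=float)[:, None]         # n x 1 population weights
    for _ in range(iters):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + 1e-12  # n x c
        inv = 1.0 / d2
        U = inv / inv.sum(axis=1, keepdims=True)         # memberships for m = 2
        Um = w * U ** 2                                  # weighted fuzzified memberships
        V_new = (Um.T @ X) / Um.sum(axis=0)[:, None]     # centre update
        if np.abs(V_new - V).max() < eps:
            return V_new, U
        V = V_new
    return V, U
```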

Journal Article
TL;DR: A new content-based color image retrieval method in which both the color content and the shape feature of the image are taken into account; a Gaussian model is used to normalize the distances of the different sub-features.
Abstract: The color histogram based image retrieval method is simple and invariant to translation and rotation of the images, but it loses the spatial information of the color. Recently many methods, such as the accumulative histogram, color correlograms, and local color histograms, have been introduced to improve the color histogram method. In this paper, a new content-based color image retrieval method is proposed in which both the color content and the shape feature of the image are taken into account. First, based on special treatment of the HSV color space, an improved accumulative histogram of the hue is calculated as the color feature. To capture spatial information, the H-, S-, and V-components of the image are divided into n×n blocks, which are classified into 3 statuses: flatness, texture, and edge. Each gray image is translated into a matrix of these 3 status values; the status matrix is then transformed into a 1-dimensional status sequence, and the transition probability matrix of the sequence is calculated as the image's spatial distribution information. In matching image similarity, a Gaussian model is used to normalize the distances of the different sub-features. Experiments with different kinds of images indicate that this method is highly effective for image retrieval.
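
A minimal sketch of the color-feature half: the accumulative (cumulative) hue histogram and a simple L1 matching distance. Bin count and hue normalization are assumptions; the paper's HSV pre-treatment is not reproduced.

```python
import numpy as np

def accumulative_hue_histogram(hue, bins=36):
    """Cumulative histogram of the hue channel; `hue` is assumed already
    normalized to [0, 1). The result rises monotonically to 1."""
    hist, _ = np.histogram(hue, bins=bins, range=(0.0, 1.0))
    return np.cumsum(hist) / hue.size

def feature_distance(f1, f2):
    """L1 distance between two accumulative-histogram features."""
    return np.abs(f1 - f2).sum()
```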

Journal Article
TL;DR: In this paper, existing MPEG video encryption algorithms are classified into four types according to the relationship between the encryption process and the compression process, and each is evaluated in four respects: security, compression ratio, computational complexity, and operability.
Abstract: Video encryption is a suitable method to protect video data. In this paper, existing MPEG video encryption algorithms are classified into four types according to the relationship between the encryption process and the compression process: complete encryption, partial encryption, DCT coefficient encryption, and entropy encoding encryption. Each of them is evaluated in four respects: security, compression ratio, computational complexity, and operability. Theoretical analyses and experimental results are presented to compare these algorithms, and their application fields, which are consistent with their properties, are given. Following the development direction of video applications, encryption algorithms combined with the encoding process will be studied further in the future, such as DCT coefficient encryption, entropy encoding encryption, and novel algorithms combined with error-correcting codes.

Journal Article
TL;DR: A normalized Hough transform algorithm based on grey level is proposed; using the normalization technique, it makes the detection result independent of noise and of the different lengths of different lines, and it can extract the ship's velocity automatically.
Abstract: As one of the important applications of Synthetic Aperture Radar (SAR) imagery, ship wake detection has received considerable attention in marine remote sensing. Most recent research on ship wake detection relies on the mathematical tool of the Radon transform. However, the result of the Radon transform is an image in which the wake features are enhanced, not the end points of the wakes. In order to calculate the ship's velocity, the aim of this research is to detect the end points of ship wakes. Building on an improvement of the conventional Hough transform, a normalized Hough transform algorithm based on grey level is proposed. Using the normalization technique, it makes the detection result independent of noise and of the different lengths of different lines. The steps of the algorithm are presented, and the expression for the wake's end points is derived. The algorithm for calculating the ship's moving velocity is also presented. The time and space complexities of the normalized Hough transform and the conventional Hough transform are analyzed respectively. The algorithm is applied to ship wake detection in SAR images; the experiments obtained good results, and the ship's velocity can be extracted automatically.
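
A simplified reading of the grey-level normalized Hough transform: the accumulator sums gray values along each candidate line and divides by the number of contributing pixels, so longer lines gain no advantage. Discretization parameters are illustrative.

```python
import numpy as np

def normalized_gray_hough(image, n_theta=180, n_rho=256):
    """Hough accumulator over (rho, theta) that returns the mean gray level
    along each discretized line rather than an edge-pixel count."""
    h, w = image.shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta))
    cnt = np.zeros((n_rho, n_theta))
    ys, xs = np.mgrid[0:h, 0:w]
    ys, xs = ys.ravel(), xs.ravel()
    vals = image.ravel().astype(float)
    for j, t in enumerate(thetas):
        rho = xs * np.cos(t) + ys * np.sin(t)
        idx = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        np.add.at(acc[:, j], idx, vals)          # sum gray values per cell
        np.add.at(cnt[:, j], idx, 1)             # count contributing pixels
    return acc / np.maximum(cnt, 1)              # length-normalized response
```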

Journal Article
TL;DR: The experimental results demonstrate the effectiveness and the robustness of the visual navigation approach.
Abstract: Guidance using path following is widely applied in the field of autonomous mobile robots. Compared with navigation systems without vision, visual navigation has obvious advantages: rich information, low cost, quietness, innocuity, etc. This paper describes a navigation system which uses the visual information provided by guide lines and color signs. In our approach, the visual navigation is composed of three main modules: image preprocessing, path recognition, and path tracking. First, the image preprocessing module formulates color models of all kinds of objects, and establishes each object's support through adaptive-subsampling-based binarization and mathematical morphology. Second, the path recognition module detects the guide lines through an improved Hough transform algorithm, and the detected results, including guide lines and color signs, integrate the path information. Finally, calling different functions according to the movement of going straight or turning, the path tracking module provides the required input parameters to the motor controller and steering controller. The experimental results demonstrate the effectiveness and the robustness of our approach.

Journal Article
TL;DR: A correlation tracking algorithm based on the correlation coefficient, which overcomes the disadvantages of traditional correlation tracking based on point-to-point multiplication with accumulation, has the advantages of good accuracy and high stability, and has been applied to a real-time object-tracking system.
Abstract: This paper presents a correlation tracking algorithm based on the correlation coefficient, which overcomes the disadvantages of traditional correlation tracking based on point-to-point multiplication with accumulation and has the advantages of good accuracy and high stability. At the same time, many measures are put forward to improve the speed of the algorithm, which meets the real-time requirement of object tracking. During tracking, the object may change considerably over a sequence of images; therefore a reasonable template updating strategy is the key to the object-tracking problem. On the basis of the similarity measurement and template buffers, a suitable template updating strategy is given, which effectively decreases the accumulation of tracking error and greatly improves tracking stability. The presented failure judgement effectively resolves transitory tracking failures caused by sudden changes, such as changes between dark and bright, or temporary occlusion of the tracked object. The experiments show that the solution decreases the complexity of correlation tracking and has the advantages of good accuracy and high speed as well. This algorithm has been applied to a real-time object-tracking system.
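
One tracking step of the correlation-coefficient idea can be sketched with OpenCV's normalized cross-correlation matcher; the search-window size and the use of the score for the failure judgement are illustrative assumptions.

```python
import cv2

def track_ncc(frame, template, prev_pos, search=32):
    """Search a window around the previous position for the location that
    maximizes the normalized correlation coefficient.

    `frame` and `template` are uint8 grayscale images; `prev_pos` is the
    template's previous top-left corner (x, y)."""
    x, y = prev_pos
    th, tw = template.shape[:2]
    x0, y0 = max(0, x - search), max(0, y - search)
    roi = frame[y0:y0 + th + 2 * search, x0:x0 + tw + 2 * search]
    res = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)
    new_pos = (x0 + loc[0], y0 + loc[1])
    return new_pos, score      # a low score can trigger the failure judgement
```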

Journal Article
TL;DR: A review of the methods of performance evaluation in content-based image retrieval proposed in the literature, an attempt to identify future directions, and a recommendation that a standard test-bed for evaluating image retrieval effectiveness be established.
Abstract: Promoted by the professional and diverse demands of image retrieval, the technique of content-based image retrieval (CBIR) is maturing, and more and more commercial and scientific research systems are being developed. As any technique is promoted by performance evaluation in its research area, for the development of effective image retrieval applications it is imperative to study standards of performance evaluation in content-based image retrieval. Problems such as a common image database for performance comparison and a means of obtaining relevance judgments for queries are explained. This paper presents a review of the methods of performance evaluation in content-based image retrieval proposed in the literature and tries to identify future directions. It also recommends that the content-based retrieval research community establish a standard test-bed for evaluating image retrieval effectiveness. Further work needs to be done to better involve users in the evaluation process, because the ultimate aim is to measure the usefulness of a system for a user; interactive performance evaluations including several levels of feedback and user interaction need to be developed.

Journal Article
TL;DR: In this article, an ordering threshold switching median filter is proposed to resolve the contradiction between noise attenuation and image detail preservation, and the results indicate that the new method has better properties.
Abstract: This paper presents an ordering threshold switching median filter to resolve the contradiction between noise attenuation and image detail preservation. From the ordering information of the pixels in the window, and based on extremum median filtering, the image corrupted by impulse noise is divided into three pixel classes: noise pixels; edges and details; and smooth regions. Based on statistics from many standard test images, the parameters of the classifier are chosen so as to handle most images adaptively. Switching median filtering is then applied with the classifier: smooth regions and noise pixels are filtered by median filters, which have good noise-removal capability, especially for 'salt and pepper' noise, while most of the edges and details of the image are left untouched, so the restored image keeps its details even under impulse noise of variable magnitude. A comparison of the median filter, the extremum median filter, and the proposed method is provided, both with subjective images and with objective MAE and MSE data. The results indicate that the new method has better properties.
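
A simplified two-class sketch of the switching idea (the paper's three-class classifier and its tuned parameters are not reproduced): only window-extremum pixels, the likely impulses, are replaced by the median.

```python
import numpy as np
from scipy.ndimage import median_filter, minimum_filter, maximum_filter

def switching_median(image, size=3):
    """Extremum-based switching median filter: pixels equal to the local
    minimum or maximum are treated as impulses and median-filtered;
    everything else passes through unchanged."""
    img = image.astype(float)
    lo = minimum_filter(img, size)
    hi = maximum_filter(img, size)
    med = median_filter(img, size)
    noisy = (img == lo) | (img == hi)       # extremum pixels flagged as impulses
    return np.where(noisy, med, img)
```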

Journal Article
TL;DR: The presented method can be applied in many cases and avoids the disadvantages of methods based on a mathematical model; experiments show that it achieves satisfactory results and removes uneven illumination from digital aerial images effectively.
Abstract: Uneven illumination in aerial images causes color or luminance differences within an image frame. These differences affect the production quality of digital aerial images and their further applications. This paper analyses the causes and characteristics of uneven illumination in aerial images, and studies in depth the application of MASK dodging to them. Based on the MASK dodging principle, the paper then presents a processing flow and the corresponding processing methods for removing uneven illumination from digital aerial images. Finally, an experiment is given which shows that the presented method can be applied in many cases and avoids the disadvantages of methods based on a mathematical model. The experiment also shows that the method achieves satisfactory results and removes uneven illumination from digital aerial images effectively.
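
A minimal sketch of the dodging principle, assuming a Gaussian blur as the "mask" that models the slowly varying illumination; the background model and sigma are illustrative, not the paper's exact flow.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mask_dodging(image, sigma=60.0):
    """MASK dodging sketch: subtract a heavily blurred copy (the estimated
    uneven illumination) and restore the global mean brightness."""
    img = image.astype(float)
    background = gaussian_filter(img, sigma)    # low-frequency illumination "mask"
    corrected = img - background + img.mean()
    return np.clip(corrected, 0, 255)
```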

Journal Article
TL;DR: It is proved by examples that the method effectively reduces the noise error of measured data obtained by laser line scanning and can meet the requirements of curve and surface reconstruction.
Abstract: Measured data are obtained with a laser scanner in reverse engineering. The real data inevitably contain unreasonable noise error introduced during measuring; this noise error makes the reconstructed curves and surfaces rough, so it is essential to remove it. This paper investigates methods for reducing the noise error of measured data obtained by laser line scanning. Noise reduction is closely related to the organization of the point cloud data, so the paper analyzes the mathematical error model of the point cloud data. The noise error is mainly caused by random error; its characteristic is a large swing value, appearing as peaks on the scanning line. According to this feature, a method named the random filter algorithm is put forward for reducing noise error; it is simple, quick, and practical. The procedure of this algorithm is first to compare the relative positions of successive points; points whose positions oscillate beyond a threshold are then judged to be noise and removed. The principle and the steps are described in detail, and examples prove that the method is effective and can meet the requirements of curve and surface reconstruction.
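
A sketch of the peak test on one ordered scan line: a point that lies on a smooth path satisfies d(i-1,i) + d(i,i+1) close to d(i-1,i+1), so a large excess flags a noise peak. The threshold form is an assumption.

```python
import numpy as np

def random_filter_scanline(points, threshold=0.5):
    """Drop noise peaks from one laser scan line.

    `points` is an ordered (n, 3) array of measured coordinates;
    `threshold` is in the data's length unit (assumed parameter)."""
    keep = np.ones(len(points), dtype=bool)
    for i in range(1, len(points) - 1):
        d_prev = np.linalg.norm(points[i] - points[i - 1])
        d_next = np.linalg.norm(points[i] - points[i + 1])
        d_span = np.linalg.norm(points[i + 1] - points[i - 1])
        if d_prev + d_next > d_span + threshold:   # sharp swing at point i
            keep[i] = False
    return points[keep]
```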

Journal Article
TL;DR: A detailed direction relations model (DDRM) is proposed that can describe information related to the interior of the MBR of the reference object; by composing it with topological relations, the number of discernible spatial relations for line/region, region/region, and point/region pairs increases.
Abstract: Spatial relations are used comprehensively in many applications, such as spatial database query languages, content-based image retrieval, and similarity of spatial scenes. However, conventional approaches to describing direction relations cannot express direction information effectively when the target object is inside or intersects the minimum bounding rectangle (MBR) of the reference object, which limits the description of concepts related to spatial relations. In this paper, firstly, a detailed direction relations model (DDRM) is proposed that can describe information related to the interior of the MBR of the reference object. The DDRM includes interior, boundary, and ring direction relations, and each direction relation is composed of at most 9 atomic directions. The interior directions describe concepts related to the interior of the reference object, the boundary directions deal with terms related to the boundary of the reference object, and the ring directions describe information about the difference region between the reference object and its MBR. Secondly, after composing topological relations with the DDRM, the number of discernible spatial relations for line/region, region/region, and point/region pairs increases. Unlike current direction relations models, the DDRM is sensitive to the shape and holes of a reference object, and it helps improve the ability to describe spatial relations.

Journal Article
TL;DR: A novel approach to edge detection based upon the maximum fuzzy partition entropy principle, which performs better than some classical gradient-based edge detection methods.
Abstract: Image processing has to deal with a large amount of information in an image, and the maximum entropy theorem of information theory is one of the useful tools for handling this kind of information. Based upon the maximum fuzzy partition entropy principle, a novel approach to edge detection is presented. After the concepts and principles of fuzzy probability and fuzzy partition are introduced briefly, a definition of fuzzy partition entropy is proposed. Using the relation between the probability partition and the fuzzy 2-partition of the image gradient, the algorithm is based on conditional probabilities and fuzzy partition. First, a gradient operator is applied and the gradient image is produced. Second, edge detection is cast as finding a fuzzy partition of the gradient image, which is considered as being composed of an edge region and a smooth region; the optimal threshold is searched automatically in the gray-level histogram by maximizing the entropy of the fuzzy partition. Finally, an edge-enhancing procedure is executed on the edge image. Experiments on various test images show that the proposed approach performs better than some classical gradient-based edge detection methods.
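
A rough sketch of the threshold search under stated assumptions: a sigmoid membership splits the gradient histogram into fuzzy "edge" and "smooth" events, and the threshold maximizing the two-event entropy is returned. The membership function and its slope are assumed, not the paper's definitions.

```python
import numpy as np

def fuzzy_partition_threshold(grad, bins=256, beta=10.0):
    """Search the gradient histogram for the threshold that maximizes the
    entropy of a fuzzy 2-partition (edge vs. smooth)."""
    hist, edges = np.histogram(grad.ravel(), bins=bins)
    p = hist / hist.sum()                        # gray-level probabilities
    g = 0.5 * (edges[:-1] + edges[1:])           # bin centres
    best_t, best_h = g[0], -np.inf
    for t in g[1:-1]:
        mu_edge = 1.0 / (1.0 + np.exp(-beta * (g - t) / (g.max() + 1e-9)))
        p_edge = (p * mu_edge).sum()             # probability of fuzzy event "edge"
        p_smooth = 1.0 - p_edge
        if min(p_edge, p_smooth) < 1e-9:
            continue
        h = -p_edge * np.log(p_edge) - p_smooth * np.log(p_smooth)
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```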

Journal Article
TL;DR: CKCA, a constructive kernel covering algorithm, is used to recognize car plate characters that are sloped or fuzzy, with satisfactory results.
Abstract: The main characteristic of constructive neural networks is that the network is built step by step while processing a given data set; its structure and parameters are discovered by learning rather than fixed in advance. By introducing kernel functions for non-linear transformation, a support vector machine (SVM) transforms an input space into a high-dimensional kernel space and then seeks the best linear classification plane in this new space; the classification function is formally similar to a neural network. The constructive kernel covering algorithm (CKCA) combines constructive learning methods of neural networks, such as the covering algorithm, with the kernel function methods of SVM. CKCA first maps the input data set into a kernel space and then classifies the data set using a covering algorithm in this kernel space. The CKCA method has low computational cost, strong constructive ability, and good visibility; therefore it is suitable for problems such as the classification of vast high-dimensional data sets and image recognition. In this paper, CKCA is used to recognize car plate characters that are sloped or fuzzy, and the results are satisfactory.

Journal Article
TL;DR: This paper analyzes the characteristics of the human visual system (HVS), including visual nonlinearity, multiple channels, and the masking effect, and then constructs a mathematical model to simulate how the HVS processes image information.
Abstract: With image compression and codec technology booming, and with the need to control the transmission of their bit-streams, image quality assessment faces new demands and challenges, and has become one of the basic technologies in the field of imaging and information. This paper first analyzes the characteristics of the human visual system (HVS), including visual nonlinearity, multiple channels, and the masking effect, and then constructs a mathematical model to simulate how the HVS processes image information. In the model, original images and distorted images are transformed respectively into the perceptual domain: both are first processed by the basic visual nonlinearity and then passed through a band-pass filter built from the modulation transfer function. Following the multi-channel processing in the HVS model, and according to the masking characteristic, the band-pass filter is divided into five bands by the spatial frequency of the initial image. Finally, the values obtained from the visual pathways yield the HVS-E (error) measure. Experimental results indicate good agreement between the HVS-E and the MOS (mean opinion score).

Journal Article
TL;DR: The ninth yearly bibliographic survey of image engineering in China, intended to capture its up-to-date development, provide a convenient literature-searching facility for readers working in related areas, and supply a useful reference for journal editors and potential authors.
Abstract: This is the ninth in the survey series of yearly bibliographies on image engineering in China. The purpose of this survey is to capture the up-to-date development of image engineering in China, to provide a convenient literature-searching facility for readers working in related areas, and to supply a useful reference for journal editors and potential authors. Considering the wide distribution of related publications in China, 577 image engineering research and technique references were carefully selected from 2341 research papers published in a set of 15 Chinese journals. These 15 journals are considered important journals in which papers concerning image engineering are of higher quality and relatively concentrated. The selected references are classified first into 5 categories (image processing, image analysis, image understanding, technique application, and survey), and then into 21 specialized classes according to their main contents. Some analysis and discussion of the statistics of the classification results are also presented. This work shows a general, off-the-shelf picture of the various progresses of image engineering in China. In 2003, the number of research papers in image engineering increased considerably. Besides the "traditional" fields of image segmentation and image coding, emerging research topics such as digital image watermarking, human face and organ detection, image matching and information fusion, and image and video retrieval remain "hot". It should be pointed out in particular that the ratio of image engineering papers to all papers published in the above 15 journals reached a new high in 2003, showing the tendency of fast progress of image engineering in China.

Journal Article
TL;DR: An improved image template thinning algorithm is proposed in the paper based on a group of modified templates that can significantly reduce the scanning iterations and speed up the thinning process while giving the same or better thinning result.
Abstract: Image thinning is one of the time-consuming processes in image processing, since it often needs several scanning iterations over the whole image data. Research efforts have been made to reduce the number of scanning iterations of thinning algorithms. Image template thinning algorithms are parallel algorithms that have been widely applied in image processing, with better performance than serial thinning methods. However, some commonly used template thinning algorithms have shortcomings such as still needing quite a few iterations and leaving some regions incompletely thinned. To overcome these shortcomings, an improved image template thinning algorithm based on a group of modified templates is proposed in this paper. The new algorithm significantly reduces the scanning iterations and speeds up the thinning process while giving the same or better (complete) thinning results. It has been applied to the thinning of fingerprint images, and its validity is confirmed by the experimental results in the paper.
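
For flavor, here is the classical parallel template thinning that this family of algorithms builds on (Zhang-Suen); the paper's modified templates are not reproduced. Foreground pixels are 1, background 0.

```python
import numpy as np

def zhang_suen_thinning(img):
    """Classical Zhang-Suen parallel thinning of a binary image,
    shown as a baseline for template thinning algorithms."""
    img = img.astype(np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            p = np.pad(img, 1)
            # 8-neighbours P2..P9, clockwise from north
            n = [p[:-2, 1:-1], p[:-2, 2:], p[1:-1, 2:], p[2:, 2:],
                 p[2:, 1:-1], p[2:, :-2], p[1:-1, :-2], p[:-2, :-2]]
            B = sum(n)                                       # neighbour count
            seq = n + [n[0]]
            A = sum((seq[i] == 0) & (seq[i + 1] == 1) for i in range(8))
            if step == 0:   # P2*P4*P6 == 0 and P4*P6*P8 == 0
                cond = (n[0] * n[2] * n[4] == 0) & (n[2] * n[4] * n[6] == 0)
            else:           # P2*P4*P8 == 0 and P2*P6*P8 == 0
                cond = (n[0] * n[2] * n[6] == 0) & (n[0] * n[4] * n[6] == 0)
            remove = (img == 1) & (B >= 2) & (B <= 6) & (A == 1) & cond
            if remove.any():
                img[remove] = 0
                changed = True
    return img
```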

Journal Article
TL;DR: A new stereo matching method based on a neural network, in which the energy function is built on uniqueness, compatibility, and similarity constraints and reflects the constraint relations of all matching units on the same epipolar lines.
Abstract: Stereo matching is one of the classic difficult problems in computer vision, and its complexity and precision limit the capability of a vision system to reconstruct the 3D scene. This paper presents a new matching method based on a neural network. Assuming stereo rectification has been performed, an energy function is built on the basis of uniqueness, compatibility, and similarity constraints, reflecting the constraint relations of all matching units on the same lines. It is then mapped onto a 2D neural network for minimization, whose final stable state indicates the likely correspondence of the matching units. The depth map is acquired by performing this operation on all epipolar lines. The algorithm has two traits relative to traditional approaches: (1) individual pixels, rather than scene points or edge lines, are adopted as matching units, so a dense depth map is obtained directly; (2) the external input of the nodes is not constant but a function of the gray-level similarity of corresponding points. Experiments on synthetic and real images demonstrate the feasibility of the approach.

Journal Article
TL;DR: Experimental results prove that this method is effective for representing rain in 3D terrain scenes, producing realistic results while satisfying the basic need for real-time interactive navigation.
Abstract: In the present investigation, a method for simulating rain in a 3D terrain scene in real time is proposed, using particle systems. In accordance with the basic principles of particle systems, the applicable attributes of the rain particle system and the rain particles have been analyzed. The methodological approach employs feasible techniques and algorithms such as: defining a cube above the top of the view frustum as the generation volume for rain particles; using a point (in pixels) followed by a line as the shape of a rain particle; simulating the influence of gravity on falling rain particles; continuously replenishing rain particles in particle groups; and testing the fall heights of rain particles in real time. Experimental results prove that this method is effective for representing rain in 3D terrain scenes, producing realistic results while satisfying the basic need for real-time interactive navigation.
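
A minimal sketch of the particle loop under stated assumptions: particles spawn in a box above the view frustum, fall under gravity, and respawn when they reach the terrain. All numeric ranges are illustrative.

```python
import numpy as np

class RainParticles:
    """Simple rain particle system: spawn box, gravity, respawn on impact."""

    def __init__(self, n=5000, box=((-50, 50), (-50, 50), (60, 80))):
        self.box = box
        self.pos = self._spawn(n)
        self.vel = np.zeros((n, 3))
        self.vel[:, 2] = -np.random.uniform(15, 25, n)   # initial fall speed

    def _spawn(self, n):
        lo = np.array([b[0] for b in self.box])
        hi = np.array([b[1] for b in self.box])
        return np.random.uniform(lo, hi, size=(n, 3))    # uniform in the box

    def update(self, dt, terrain_height=0.0, g=9.8):
        self.vel[:, 2] -= g * dt                         # gravity influence
        self.pos += self.vel * dt
        dead = self.pos[:, 2] <= terrain_height          # fall-height test
        if dead.any():                                   # replenish the group
            self.pos[dead] = self._spawn(dead.sum())
            self.vel[dead, 2] = -np.random.uniform(15, 25, dead.sum())
```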