Showing papers in "The Imaging Science Journal in 2011"


Journal ArticleDOI
TL;DR: This paper proposes an improved salient point detector based on wavelet transform; it can extract salient points in an image more accurately and is tested using a wide range of image samples from the Corel Image Library.
Abstract: A content-based image retrieval system normally returns retrieval results according to the similarity between features extracted from the query image and candidate images. In certain circumstances, however, users may be more concerned with salient regions in an image of interest and only wish to retrieve images containing the relevant salient regions while ignoring irrelevant ones (such as the background or other regions and objects). Although how to represent local image properties is still one of the most active research issues, much previous work on image retrieval does not examine salient regions in an image. In this paper, we propose an improved salient point detector based on the wavelet transform; it can extract salient points in an image more accurately. Salient points are then segmented into different salient regions according to their spatial distribution. Colour moments and Gabor features of these different salient regions are computed and form a feature vector to index the image...
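As a concrete illustration of the indexing step, here is a minimal Python sketch of the colour-moment part of such a feature vector: the first three moments (mean, standard deviation, skewness) of each channel of a salient region. The function name and colour space are illustrative choices; the paper's Gabor texture features and the salient point detector itself are not reproduced.

```python
import numpy as np

def colour_moments(region: np.ndarray) -> np.ndarray:
    # region: (N, 3) array of a salient region's pixels in some colour space
    mu = region.mean(axis=0)
    sigma = region.std(axis=0)
    # Cube root of the third central moment keeps the skewness in
    # pixel-value units and handles negative values.
    skew = np.cbrt(((region - mu) ** 3).mean(axis=0))
    return np.concatenate([mu, sigma, skew])  # 9-dimensional colour descriptor
```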

39 citations


Journal ArticleDOI
TL;DR: A new approach is presented to detect and remove unwanted printed lines in a text image at any position without character distortion, avoiding a separate restoration stage; the approach is equally suitable for line removal in printed and handwritten text written in any language.
Abstract: In composite document images, handwritten and printed text is often found overlapped with printed lines. The problem becomes critical for obscure and broken lines at multiple positions. Consequently, line removal is an unavoidable pre-processing stage in the development of robust object recognisers. Moreover, the restoration of damaged characters after line removal remains a problem of interest. This paper presents a new approach to detect and remove unwanted printed lines in the text image at any position without character distortion, avoiding a restoration stage. The proposed technique is based on connected component analysis. Experiments are conducted using single-line images that were scanned and extracted manually from several documents and forms. It is demonstrated that our approach is equally suitable for line removal in printed and handwritten text written in any language, circumventing the restoration stage. Promising results are reported in comparison with the other res...

36 citations


Journal ArticleDOI
TL;DR: A new way to determine the adjustable parameters is proposed and a modified Canny edge detection algorithm is constructed that achieves better edge detection results in most cases and is also useful for object boundary closing as a pre-segmentation step.
Abstract: The Canny edge detection algorithm contains a number of adjustable parameters, which can affect the computation time and effectiveness of the algorithm. To overcome these shortcomings, this paper proposes a new way to determine the adjustable parameters and constructs a modified Canny edge detection algorithm. In the algorithm, an image is first smoothed by an adaptive filter that is selected based on the properties of the image, instead of a fixed-size Gaussian filter; then, the high and low thresholds for the gradient magnitude image are determined based on maximum cross-entropy between inter-classes and Bayesian judgment theory, without any manual operation; finally, if needed, an object closing procedure is carried out. To test and evaluate the algorithm, a number of different images are tested and analysed, and the test results are discussed. The experiments show that the studied algorithm achieves better edge detection results in most cases, and it is also useful for object boundary closing as a pre-segmentation step.
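A minimal sketch of the parameter-free flavour of Canny described above, with two stand-ins clearly swapped in: the paper selects its smoothing filter from image properties and derives thresholds via maximum cross-entropy and Bayesian judgment, whereas this sketch uses a bilateral filter and Otsu's criterion on the gradient-magnitude histogram.

```python
import cv2
import numpy as np

def auto_canny(gray: np.ndarray) -> np.ndarray:
    # Edge-preserving bilateral filter stands in for the paper's
    # adaptively selected smoother (parameters are illustrative).
    smoothed = cv2.bilateralFilter(gray, 5, 50, 5)
    # Gradient magnitude, rescaled to 8 bits so Otsu can run on it.
    gx = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    mag8 = np.uint8(255 * mag / (mag.max() + 1e-9))
    otsu, _ = cv2.threshold(mag8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Map the Otsu level back to gradient units; low = half of high is a
    # common heuristic, not the paper's Bayesian rule.
    high = otsu / 255.0 * mag.max()
    return cv2.Canny(smoothed, 0.5 * high, high)
```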

18 citations


Journal ArticleDOI
C Qin, F Cao, X P Zhang
TL;DR: An image inpainting algorithm based on adaptive edge-preserving propagation is proposed for structure repairing: neighbouring information is progressively propagated into the damaged region, and the optimal size and location of the window containing a damaged pixel are adaptively chosen according to the intact degree and colour distribution.
Abstract: We propose an image inpainting algorithm based on adaptive edge-preserving propagation for structure repairing. Neighbouring information is progressively propagated into the damaged region. The optimal size and location of the window containing a damaged pixel are adaptively chosen according to the intact degree and colour distribution. To preserve the sharpness of edges, the contributing weights of the pixels in the neighbouring window are decided by their direction relative to the isophote and their distance from the damaged pixels. Compared with typical partial differential equation (PDE)-based methods, the proposed approach is more concise and efficient, and can give satisfactory results for structural information repairing. Experiments are carried out to show the effectiveness of the method.

15 citations


Journal ArticleDOI
TL;DR: A method based on mean shift algorithm to remove noise introduced by the imaging process while minimising loss of edges is proposed and experimental results show that the proposed algorithm provides better results than the traditional focus measures in the presence of the above mentioned two types of noise.
Abstract: The technique to estimate the three-dimensional (3D) geometry of an object from a sequence of images obtained at different focus settings is called shape from focus (SFF). In SFF, the measure of focus — sharpness — is the crucial part of final 3D shape estimation. However, it is difficult to compute an accurate and precise focus value because of noise introduced during image acquisition by the imaging system. Various noise filters can be employed to tackle this problem, but they also remove sharpness information in addition to the noise. In this paper, we propose a method based on the mean shift algorithm to remove noise introduced by the imaging process while minimising the loss of edges. We test the algorithm in the presence of Gaussian noise and impulse noise. Experimental results show that the proposed algorithm provides better results than the traditional focus measures in the presence of the above-mentioned two types of noise.
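A rough sketch of the pipeline, assuming OpenCV: mean-shift filtering as the edge-preserving denoiser, followed by a conventional per-pixel focus measure (the sum-modified-Laplacian here; the paper benchmarks against traditional measures, and its exact measure and bandwidths are not reproduced).

```python
import cv2
import numpy as np

def focus_map(frame_bgr: np.ndarray) -> np.ndarray:
    # Mean-shift filtering as an edge-preserving denoiser
    # (spatial/colour bandwidths are illustrative).
    denoised = cv2.pyrMeanShiftFiltering(frame_bgr, 7, 15)
    gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY).astype(np.float64)
    # Sum-modified-Laplacian: |d2I/dx2| + |d2I/dy2| at each pixel.
    kx = np.array([[0, 0, 0], [-1, 2, -1], [0, 0, 0]], dtype=np.float64)
    ml = np.abs(cv2.filter2D(gray, -1, kx)) + np.abs(cv2.filter2D(gray, -1, kx.T))
    # Aggregate over a small window; across an SFF stack, the frame with
    # the largest value at a pixel gives that pixel's depth index.
    return cv2.boxFilter(ml, -1, (9, 9))
```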

14 citations


Journal ArticleDOI
TL;DR: A capacity-enhanced reversible data hiding scheme based on side match vector quantisation (SMVQ), whose hiding payload of secret data exceeds those of both Chang and Wu's and Chang et al.
Abstract: In recent years, many reversible data hiding schemes have been proposed. Most schemes guarantee that the original cover image can be reconstructed completely. When the secret data are hidden in a compressed image, the receiver needs to extract the secret data, reconstruct the original cover image and compress the image to save space. In 2006, Chang et al. proposed a reversible data hiding scheme based on side match vector quantisation (SMVQ). Their method can extract the secret data and reconstruct the SMVQ-compressed cover image. In this paper, we propose a capacity-enhanced reversible data hiding scheme. The hiding payload of secret data of our proposed scheme is larger than those of both Chang and Wu's and Chang et al.'s schemes for VQ- and SMVQ-based compressed images.

13 citations


Journal ArticleDOI
TL;DR: The use of several spatial-domain focus metrics frequently applied in non-coherent (classical) imaging systems is explored, and their suitability is evaluated for the automated numerical focusing of marine holograms such as those collected with eHoloCam, the underwater digital holographic camera developed at Aberdeen University.
Abstract: In this article, we will explore the use of several spatial-domain focus metrics frequently applied in non-coherent (classical) imaging systems, and evaluate their suitability for the automated numerical focusing of marine holograms such as those collected with eHoloCam, the underwater digital holographic camera developed at Aberdeen University. Metrics will be applied to a set of particle holograms chosen to represent a range of features within the three categories: opacity, edge roughness and distance from image sensor.
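For orientation, a hedged sketch of what automated numerical focusing looks like in code: propagate the hologram to candidate depths with the angular-spectrum method and keep the depth that maximises a spatial-domain metric (normalised variance here, one classical metric of the kind the article evaluates). Wavelength, pixel pitch and depth range are illustrative values, not eHoloCam parameters.

```python
import numpy as np

def angular_spectrum(field: np.ndarray, z: float, wavelength: float, pitch: float) -> np.ndarray:
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=pitch)
    fy = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function exp(2*pi*i*z*sqrt(1/lambda^2 - fx^2 - fy^2)),
    # with evanescent components clipped to zero.
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kernel = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

def autofocus(hologram: np.ndarray, depths, wavelength=532e-9, pitch=7.4e-6) -> float:
    def normalised_variance(img):
        mu = img.mean()
        return img.var() / mu if mu > 0 else 0.0
    scores = [normalised_variance(np.abs(angular_spectrum(hologram, z, wavelength, pitch)) ** 2)
              for z in depths]
    return depths[int(np.argmax(scores))]  # best-focus depth
```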

13 citations


Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed scheme is superior to other reversible schemes in terms of both image quality and embedding capacity.
Abstract: This work presents a novel histogram-based reversible data hiding scheme. Although common histogram-based reversible data hiding schemes can achieve high image quality, embedding capacity is restricted because general images usually do not contain a great number of pixels with the same pixel values. To improve embedding capacity and retain low distortion, the proposed scheme uses prediction-error values, which are derived from the difference between an original image and a predictive image, instead of using the original pixels to convey a secret message. In the proposed scheme, a predictive image is generated using the mean interpolation prediction method. Since the obtained predictive image is very similar to the original image, the prediction-error values tend to zero. That is, a great quantity of peak points gathers around zero. The proposed scheme takes full advantage of this property to increase embedding capacity while keeping distortion slight. Moreover, a threshold is used to balance ...
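A simplified sketch of the embedding idea, assuming a checkerboard layout so the scheme stays invertible: even-parity pixels are left intact and predict their odd-parity neighbours by mean interpolation, and bits are hidden by histogram shifting of the prediction errors around the zero peak. The paper's threshold mechanism and overflow handling are omitted.

```python
import numpy as np

def embed(image: np.ndarray, bits) -> np.ndarray:
    # image: uint8 grey image; bits: iterable of 0/1 payload bits.
    img = image.astype(np.int32).copy()
    it = iter(bits)
    h, w = img.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            if (r + c) % 2 == 0:
                continue  # even-parity cells stay intact as predictors
            pred = (img[r-1, c] + img[r+1, c] + img[r, c-1] + img[r, c+1]) // 4
            e = img[r, c] - pred
            if e > 0:
                img[r, c] += 1       # shift positive errors to free bin 1
            elif e == 0:
                b = next(it, None)   # the zero peak carries the payload
                if b is not None:
                    img[r, c] += b
    # A real scheme keeps a location map for pixels that would overflow 255;
    # the decoder recomputes pred from the untouched even-parity cells.
    return np.clip(img, 0, 255).astype(np.uint8)
```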

10 citations


Journal ArticleDOI
TL;DR: In this paper, a method of morphological image processing is proposed for laser speckle image processing based on the theory of mathematical morphology and some basic morphological operations, such as opening, closing, skeletonisation and deburring.
Abstract: In speckle metrology, the processing of the speckle image is very important. A method of morphological image processing is proposed in this paper. The theory of mathematical morphology and some basic morphological operations are introduced. In a laser speckle double-exposure experiment, we obtain the specklegram, which contains the information of the micro-displacement. We then employ a point-to-point analysis device to obtain the image of laser speckle pattern interference fringes. Using the theory of speckle, the formula of the micro-displacement is obtained. In the image processing, we first use grey-level transformation and histogram equalisation. Then we use a dynamic local threshold method to obtain the binary image of the laser speckle pattern interference fringes. In order to extract the fringe spacing, the image is processed by mathematical morphology operations in this work, such as opening, closing, skeletonisation and deburring. After that, the image can be used to e...
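A compact scikit-image sketch of the morphological stage described above: dynamic local thresholding, opening and closing to clean the binary fringes, then skeletonisation with a crude fragment-removal step standing in for "deburring". Structuring-element and block sizes are illustrative, not the paper's values.

```python
import numpy as np
from skimage.filters import threshold_local
from skimage.morphology import (binary_closing, binary_opening, disk,
                                remove_small_objects, skeletonize)

def fringe_skeleton(gray: np.ndarray) -> np.ndarray:
    # Dynamic local threshold -> binary fringe image.
    binary = gray > threshold_local(gray, block_size=51)
    # Opening removes speckle noise; closing bridges small fringe gaps.
    binary = binary_closing(binary_opening(binary, disk(2)), disk(2))
    skel = skeletonize(binary)
    # Drops short isolated fragments; true spur pruning ("deburring")
    # of branches attached to the skeleton is more involved.
    return remove_small_objects(skel, min_size=30)
```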

7 citations


Journal ArticleDOI
TL;DR: Various objective and subjective comparisons show the superior performance of the proposed registration method, which is based on contributing a structural similarity measurement to the well-known Lucas–Kanade algorithm.
Abstract: It is commonly known that the mean square error (MSE) does not accurately reflect subjective image quality for most video enhancement tasks. Among the various image quality metrics, the structural similarity (SSIM) metric provides remarkably good prediction of subjective scores. In this paper, a new registration method is proposed based on contributing a structural similarity measurement to the well-known Lucas–Kanade (LK) algorithm. The core of the proposed method is incorporating the SSIM into the sum of squared differences of the images, along with the Levenberg–Marquardt optimisation approach in the LK algorithm. A mathematical derivation of the proposed method, based on the unified framework of Baker et al., is given. The proposed registration algorithm is successfully applied to video enhancement. Various objective and subjective comparisons show the superior performance of the proposed method.

6 citations


Journal ArticleDOI
TL;DR: The features and requirements for three-dimensional (3D) holographic displays are reviewed, the practical use of holographic interferometry as an effective tool for non-destructive inspection of museum items is examined, holographic methods for high-density data storage are discussed, and it is outlined how 3D images can be integrated into a large coverage area.
Abstract: This paper discusses some aspects of holographic display techniques pertinent to museum applications. In particular, it reviews the features and requirements for three-dimensional (3D) holographic displays, examines the practical use of holographic interferometry as an effective tool for non-destructive inspection of museum items, discusses holographic methods for high-density data storage and outlines how 3D images can be integrated into a large coverage area.

Journal ArticleDOI
TL;DR: A comprehensive comparative analysis of the performance of locality preserving projections (LPP)-based Laplacianfaces, a recently introduced algorithm, against the more traditional principal component analysis (PCA)-based Eigenfaces, providing the best combination of selected parameters to extract the best results.
Abstract: This paper provides a comprehensive comparative analysis of the performance of locality preserving projections (LPP)-based Laplacianfaces, a recently introduced algorithm, with the more traditional principal component analysis (PCA)-based Eigenfaces. All possible combinations of neighbourhood-defining distance metrics, classifier distance metrics and numbers of retained eigenvectors have been tried in different imaging environments. The FERET facial database was chosen, which provides enough diversity in illumination, facial expressions and ageing. CsuFaceIdEval, an open-source platform, is used for this comparison, and recognition rates are studied in detail. As a result of our detailed analysis, we provide the best combination of selected parameters to extract the best results from these two algorithms.

Journal ArticleDOI
TL;DR: Experimental results obtained for several watermarked medical images indicate the imperceptibility of the approach even for high payloads, and the proposed approach outperforms other related works introduced in the recent literature.
Abstract: This paper introduces a reversible and scalable blind watermarking method for medical images based on histogram shifting in the wavelet domain. Histogram shifting-based watermarking, especially in the spatial domain, suffers from the overhead of position information that has to be embedded. This not only has a negative impact on the capacity of the embedded data, but also reduces the quality of the watermarked image. To overcome this problem, a new histogram shifting approach in the wavelet domain is introduced which adaptively manages the parts of the histogram that are to be shifted. For inserting the watermark data, two thresholds, T1 and T2, are determined in the high-frequency sub-bands of the histogram based on the size of the watermark data. Two zero-points, Z1 and Z2, are created in the histogram shifting. In the histogram shifting procedure, only small parts of the histogram are shifted; therefore, the introduced distortion is considerably decreased and the quality of the watermarked image is increased. Applications of the method were examined for multiple medical image watermarking. Two sets of data, one as a copyright watermark and the other as a caption watermark containing patients' information, are first encrypted and then inserted in the high-frequency sub-bands in the integer wavelet domain. Inserting the data in different sub-bands, besides encrypting it with a private key, provides high security for the embedded data. Experimental results obtained for several watermarked medical images indicate the imperceptibility of the approach even for high payloads. The proposed approach outperforms other related works introduced in the recent literature.

Journal ArticleDOI
TL;DR: An overview of the colour holographic recording technique is presented in this paper, including the current status of colour holography based on Denisyuk's single-beam technique including the rendition of colour in a hologram.
Abstract: An overview of the colour holographic recording technique is presented. Colour holography is the most accurate imaging technology known to science. It is now possible to produce three-dimensional (3D) holographic images for display that are almost indistinguishable from the original object or scene. The current status of colour holography based on Denisyuk's single-beam technique is presented, including the rendition of colour in a hologram. The demands on the recording materials for such holograms are explained. The applications of display holography have increased now that it is possible to record artefacts in full colour for museums and for other display purposes, e.g. in advertising, art and documentation. The major advantages of holographic reproduction are discussed together with its limitations.

Journal ArticleDOI
TL;DR: A rate control algorithm for MVC based on the quadratic rate-distortion (R–D) model is proposed that achieves up to 0·25 dB improvements in peak signal-to-noise ratio (PSNR) and also improves the estimation accuracy of the mean absolute difference (MAD).
Abstract: Since the current multi-view video coding (MVC) software does not contain any rate control technique, this paper proposes a rate control algorithm for MVC based on the quadratic rate-distortion (R–D) model. The proposed algorithm classifies each picture into six frame types based on the relation between disparity prediction and temporal prediction estimation and also improves the estimation accuracy of the mean absolute difference (MAD). The proposed method allocates the bits and controls the rate for inter-view, frame layer and basic unit layer based on the analysis of the previously coded information. Compared to the multi-view video coding with fixed quantisation parameter, the proposed scheme achieves up to 0·25 dB improvements in peak signal-to-noise ratio (PSNR). Meanwhile, it can efficiently control the bit-rate with an average rate control error of 0·54%.
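The quadratic R–D model at the core of such rate controllers is compact enough to show directly: texture bits are modelled as R = a·MAD/Q + b·MAD/Q², so the quantisation step for a frame is the positive root meeting the bit target. The parameter updates and the bit allocation across views, frames and basic units are the paper's contribution and are not reproduced; the function below is a generic sketch.

```python
import math

def q_from_quadratic_model(target_bits: float, mad: float, a: float, b: float) -> float:
    # Solve target_bits = a*MAD/Q + b*MAD/Q^2 for Q > 0, i.e. the
    # positive root of target_bits*Q^2 - a*MAD*Q - b*MAD = 0.
    disc = (a * mad) ** 2 + 4.0 * target_bits * b * mad
    return ((a * mad) + math.sqrt(disc)) / (2.0 * target_bits)
```

In practice a and b are refitted after each coded frame from the observed (bits, MAD, Q) triples, and Q is clipped to the codec's legal quantiser range.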

Journal ArticleDOI
TL;DR: Two independent multi-step methods for automatic segmentation of the hip femoral and acetabular cartilages, femur and pelvis bones from CT images are presented and were effective in the presence of actual in vivo hip CT data.
Abstract: In this research, two independent multi-step methods for automatic segmentation of the hip femoral and acetabular cartilages, femur and pelvis bones from CT images are presented. In data acquisition, by injecting the contrast media in the hip joint, the hip articular space is enhanced in CT images. The hip bones and cartilages are then extracted based on available anatomical assumptions, employing quantitative measures and techniques such as radial differentiation and image bottom hat (IBH) as well as proposing several heuristic techniques. After segmentation, applying a marching cube surface rendering technique, three-dimensional visualisation of segmented cartilages and bones followed by thickness map estimation of the hip cartilages is performed. Manual segmentations of experts were employed as gold standard for evaluating the results. The proposed techniques were effective in the presence of 20 sets (5120 images) of actual in vivo hip CT data.

Journal ArticleDOI
TL;DR: More natural-looking stego images of 43 dB peak signal-to-noise ratio (PSNR) are generated by the proposed method, exceeding Wu et al.
Abstract: Secret image sharing is a technique to share a secret image among n participants. Each participant has a meaningless, noise-like share. The image is revealed if any k of the shares are gathered. This scheme uses the polynomial-based (k, n) secret sharing approach proposed by Shamir in 1979. In 2004, Lin and Tsai proposed a new secret image sharing method with steganography. Their scheme uses steganography to hide the shares into cover images. After this pioneering research, Yang et al. proposed a technique with enhanced stego image quality and better authentication ability in 2007. Wu et al. proposed another method to both decrease the size expansion ratio of stego images and increase stego image quality by 0·5 dB compared to Yang et al.'s method in 2009. A new method with better authentication ability and stego image quality is proposed in this manuscript. More natural-looking stego images of 43 dB peak signal-to-noise ratio (PSNR) are generated by the proposed method, exceeding Wu et al.'s method by 1·2 dB o...
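For reference, a minimal sketch of the underlying Shamir (k, n) sharing this family of schemes builds on: each pixel value (truncated below the prime 251, as in Thien and Lin's image variant) seeds a random polynomial of degree k−1 over GF(251), and share j stores the polynomial evaluated at x = j. The steganographic embedding and authentication layers that distinguish the compared methods are omitted.

```python
import numpy as np

P = 251  # largest prime below 256; pixels 251-255 are truncated (lossy)

def make_shares(secret: np.ndarray, k: int, n: int) -> list:
    flat = np.minimum(secret.astype(np.int64).ravel(), P - 1)
    # Random coefficients of the degree-(k-1) polynomial, one polynomial
    # per pixel; the pixel itself is the constant term.
    coeffs = np.random.randint(0, P, size=(k - 1, flat.size))
    shares = []
    for j in range(1, n + 1):
        y = flat.copy()
        for d, a in enumerate(coeffs, start=1):
            y = (y + a * pow(j, d, P)) % P
        shares.append(y.reshape(secret.shape).astype(np.uint8))
    return shares
```

Any k shares recover each pixel by Lagrange interpolation modulo 251.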

Journal ArticleDOI
TL;DR: A speed estimation method based on digital image processing of pictures taken with wireless cameras installed on top of existing traffic lights; the method is accurate for this application, and the added advantages of low-cost equipment and easy installation make it a very attractive solution.
Abstract: Most severe car accidents that occur in urban environments involve side impacts at street intersections, even at those regulated with traffic lights. Hence, it is very common to implement a small delay from when one road changes to red until the other road changes to green. This delay is intended to avoid accidents in which a vehicle decides to go through the intersection after the green–yellow–red sequence has started, underestimating the time required to clear the intersection. A better approach is to adjust the delay dynamically, depending on the speed of the vehicles approaching the intersection. Using the dynamic approach, it is possible to improve traffic flow by reducing unnecessary delays, and to improve safety by applying longer delays when needed. This paper proposes a speed estimation method based on digital image processing of pictures taken with wireless cameras installed on top of existing traffic lights. The algorithm finds a vehicle in two consecutive images (either in day or night condit...
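Once the vehicle has been located in two consecutive frames, the speed estimate itself reduces to a short calculation; the sketch below assumes a calibrated metres-per-pixel factor for the camera geometry and a known frame interval. The vehicle localisation in day and night conditions is the paper's main contribution and is not reproduced here.

```python
def speed_kmh(p1, p2, metres_per_pixel: float, frame_dt_s: float) -> float:
    # p1, p2: (x, y) vehicle positions in two consecutive frames (pixels).
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    displacement_m = metres_per_pixel * (dx * dx + dy * dy) ** 0.5
    return 3.6 * displacement_m / frame_dt_s  # m/s -> km/h
```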

Journal ArticleDOI
TL;DR: A face verification technique based on half the lower face using the minimum average correlation energy (MACE) filter, which constrains the correlation peak at the origin while minimising the average correlation energy.
Abstract: Correlation filters have been applied successfully to the pattern matching needed in face verification. In this paper, we propose a face verification technique based on half the lower face using the minimum average correlation energy (MACE) filter, which constrains the correlation peak at the origin while minimising the average correlation energy. Our experiments show that applying the MACE filter for face verification using the lower-left or -right parts of the face yields authentication with at least 80% accuracy. Authentication accuracy using the MACE filter based on the whole face increases to at least 88%, but requires approximately twice as much response time.
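For the curious, a numpy sketch of the textbook MACE filter synthesis in the frequency domain, h = D^-1 X (X^H D^-1 X)^-1 u, where the columns of X are the FFTs of the (here, half-face) training images, D holds their average power spectrum and u fixes the correlation value at the origin. This is the standard construction, not the paper's specific implementation.

```python
import numpy as np

def mace_filter(train_imgs: np.ndarray) -> np.ndarray:
    # train_imgs: (N, H, W) half-face crops from one subject.
    n, h, w = train_imgs.shape
    X = np.fft.fft2(train_imgs).reshape(n, -1).T        # (HW, N) spectra
    d = np.mean(np.abs(X) ** 2, axis=1)                 # average power spectrum
    Dinv_X = X / d[:, None]                             # D^-1 X
    u = np.ones(n)                                      # unit peak per image
    coef = np.linalg.solve(X.conj().T @ Dinv_X, u)      # (X^H D^-1 X)^-1 u
    return (Dinv_X @ coef).reshape(h, w)                # frequency-domain filter
```

Verification then correlates a probe image against the filter (inverse FFT of the probe spectrum times the conjugate filter) and thresholds the sharpness of the resulting peak.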

Journal ArticleDOI
TL;DR: Two algorithms for modified BTC (MBTC) are proposed for reducing the bitrate below 2 bpp and it is found that the reconstructed images obtained using the proposed algorithms yield better results.
Abstract: The block truncation coding (BTC) technique is a simple and fast image compression algorithm, since complicated transforms are not used. The principle used in the BTC algorithm is a two-level quantiser that adapts to local properties of the image while preserving the first-order, or first- and second-order, statistical moments. The parameters transmitted or stored in the BTC algorithm are the statistical moments and the bitplane, yielding good-quality images at a bitrate of 2 bits per pixel (bpp). In this paper, two algorithms for modified BTC (MBTC) are proposed for reducing the bitrate below 2 bpp. The principle used in the proposed algorithms is the ratio of the moments, which is a smaller value than the absolute moments. The ratio values are then entropy coded. The bitplane is also coded to remove the correlation among the bits. The proposed algorithms are compared with MBTC and with algorithms obtained by combining the JPEG standard with MBTC in terms of bitrate, peak signal-to-noise ratio (PSNR) and subjective qua...
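A minimal sketch of the baseline BTC block coder the paper modifies: each block keeps its mean, standard deviation and bitplane, and the decoder rebuilds a two-level block that preserves the first two moments. The proposed moment-ratio and entropy-coding steps are not reproduced.

```python
import numpy as np

def btc_encode(block: np.ndarray):
    # block: small (e.g. 4x4) grey-level block.
    mu, sigma = block.mean(), block.std()
    bitplane = block >= mu
    q, m = int(bitplane.sum()), block.size
    if q in (0, m):
        return bitplane, mu, mu            # flat block: one level suffices
    # Two levels chosen so the decoded block keeps the block's mean
    # and standard deviation.
    low = mu - sigma * np.sqrt(q / (m - q))
    high = mu + sigma * np.sqrt((m - q) / q)
    return bitplane, low, high

def btc_decode(bitplane, low, high):
    return np.where(bitplane, high, low)
```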

Journal ArticleDOI
TL;DR: The results show that the colorimetric properties of colour samples are more important than the number of samples and proves that there is a benefit of developing the custom target tailored to the specific need.
Abstract: When characterising a digital camera for capturing art paintings, standard targets are often used, resulting in unacceptable differences in reproducing some special colours, so additional time must be spent on visual editing and colour adjustment in various software applications. To avoid the need for excessive colour editing, and to improve workflow efficiency and colour accuracy, a special colour reference target for digital camera characterisation was developed, applied and tested in this research. The objects of digitalisation were paintings made with gouache paint, so two custom gouache reference targets with different numbers of colour samples (named CGRT24 and CGRT96) were created for digital camera colour characterisation. The main criteria for the construction of the targets were defined, as were the optimal studio conditions for digital image capture of art paintings. By analysing the results gained from profile testing, it was established that the errors in colour reproduction, which appear when...

Journal ArticleDOI
TL;DR: The PhaseCam™ as mentioned in this paper is a wavefront sensor/recorder that can record and reconstruct the complete complex optical wave function of a coherent light beam, which can be used in aero-optical tests of imaging and energy projection systems.
Abstract: The field of aero-optics deals with the measurement, understanding and correction of the effects of aerodynamics on an optical system. This paper reports on research that resulted in a number of unique diagnostic concepts and methods that can enhance aero-optical tests of imaging and energy projection systems and provide all of the information that is needed to fully characterise the aero-optical properties of a system. Some of these were fielded, tested and proven to be useful; others are still in the concept stage. The fundamental concept is built around a wavefront sensor/recorder, known as the PhaseCam™, which can record and reconstruct the complete complex optical wave function of a coherent light beam. This paper describes the concepts and methods, including applications of optical systems in three wind tunnel tests: Air Force Research Laboratory/Air Vehicles (AFRL/RB) Trisonics Gasdynamics Facility (TGF) DEBI-FX tests, Subsonic Aerodynamics Research (SORL) turret tests and Ohio Stat...

Journal ArticleDOI
TL;DR: The experimental results show that the proposed scheme offers an efficient way to hide the image information, and has the advantages of a large key space, sensitivity to initial conditions and zero correlation between adjacent pixels in the cipher-image.
Abstract: In this paper, a modelled image encryption scheme is presented, in which a generalised circulant matrix is employed together with a dynamical chaotic system. The strong correlations among adjacent pixels in the plain-image can be greatly reduced by the proposed scheme. Meanwhile, a diffusion transform is applied simultaneously to resist statistical analysis, known-plaintext and chosen-plaintext attacks. The experimental results show that the proposed scheme offers an efficient way to hide the image information, and has the advantages of a large key space, sensitivity to initial conditions and zero correlation between adjacent pixels in the cipher-image. Moreover, the grey values in the cipher-image are distributed symmetrically. The proposed scheme can be applied to practical image information transmission and protection in public channels.
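A toy sketch of the chaotic side of such schemes, assuming a logistic map: sorting the chaotic orbit yields a position permutation (confusion), and the quantised orbit doubles as a diffusion keystream. The paper's generalised circulant-matrix transform is not reproduced, and the logistic map is only one possible choice of dynamical system.

```python
import numpy as np

def logistic_stream(x0: float, n: int, mu: float = 3.9999) -> np.ndarray:
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)   # logistic map iteration
        xs[i] = x
    return xs

def encrypt(img: np.ndarray, x0: float = 0.3141592) -> np.ndarray:
    # img: uint8 image; (x0, mu) form the secret key.
    flat = img.ravel()
    ks = logistic_stream(x0, flat.size)
    perm = np.argsort(ks)                  # chaotic permutation of positions
    stream = (ks * 256).astype(np.uint8)   # chaotic diffusion bytes
    return (flat[perm] ^ stream).reshape(img.shape)
    # Decrypt: regenerate ks/stream/perm, then plain[perm] = cipher ^ stream.
```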

Journal ArticleDOI
TL;DR: In this article, a generalised multi-scale model that accounts for the inaccuracy of the capacitance data and the reconstruction model is proposed, in which the original inverse problem is decomposed into a sequence of inverse problems based on the scale variables and then solved successively from the largest scale to the smallest scale until the solution of the original problem is found.
Abstract: Successful applications of electrical capacitance tomography (ECT) depend mainly on the precision and speed of the image reconstruction algorithms. In this paper, based on the wavelet multi-scale analysis method, a generalised multi-scale model that accounts for the inaccuracy of the capacitance data and the reconstruction model is proposed, in which the original inverse problem is decomposed into a sequence of inverse problems based on the scale variables and then solved successively from the largest scale to the smallest scale until the solution of the original inverse problem is found. A generalised multi-scale objective functional, which has been developed using the least trimmed squares (LTS) estimation and the M-estimation, is proposed. This objective functional unifies the regularised LTS estimation, the regularised M-estimations, the regularised least squares (LS) estimation, the regularised combinational estimation of the LTS estimation and the M-estimations, and the regularised combinational estimation of the LS estimation and the M-estimations into a concise formulation. An efficient solver, which integrates the beneficial advantages of the homotopy algorithm, the harmony search (HS) algorithm developed using multi-harmony techniques based on the cooperation of solutions, and the particle collision (PC) algorithm, is designed for searching for a possible global optimal solution. The proposed algorithm is tested on six typical reconstruction objects using a 12-electrode square sensor. Numerical results show the efficiency and superiority of the proposed algorithm in solving the ECT image reconstruction problem. In the cases considered in this paper, good results showing great improvement in spatial resolution and accuracy are observed. The spatial resolution of the images reconstructed by the proposed algorithm is enhanced, and artefacts in the reconstructed images can be eliminated effectively. Meanwhile, the results reconstructed from noise-contaminated capacitance data indicate that the proposed algorithm successfully deals with the inaccuracy in the capacitance data.

Journal ArticleDOI
TL;DR: A new segmentation method is presented in this paper, which combines edge and region processing technologies: multi-scale morphological gradient images of the original images are computed, and the watershed algorithm is then used to segment them so that tongue-coating images are separated.
Abstract: The segmentation of tongue-coating images is very important to computer-aided traditional Chinese medicine (TCM) clinical diagnosis systems. To date, algorithms for this segmentation have suffered from low accuracy or high complexity. Based on the edge-gradient feature of tongue-coating images, a new segmentation method is presented in this paper, which combines edge and region processing technologies. With the new method, the multi-scale morphological gradient images of the original images are computed first, and the watershed algorithm is then used to segment them so that tongue-coating images are separated. The experimental results that follow show this method's effectiveness and efficiency.
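A short sketch of the segmentation route described above, assuming scikit-image: a multi-scale morphological gradient feeds a marker-controlled watershed. The scale set and the marker rule (seeding from the flattest regions) are illustrative choices, not the paper's exact ones.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import dilation, disk, erosion
from skimage.segmentation import watershed

def multiscale_gradient(gray: np.ndarray, scales=(1, 2, 3)) -> np.ndarray:
    # Average of morphological gradients at several structuring-element sizes.
    grads = [dilation(gray, disk(s)) - erosion(gray, disk(s)) for s in scales]
    return np.mean(grads, axis=0)

def segment(gray: np.ndarray) -> np.ndarray:
    grad = multiscale_gradient(gray.astype(np.float64))
    # Seed the watershed from the flattest 20% of pixels to avoid the
    # over-segmentation of an unseeded watershed.
    markers, _ = ndi.label(grad < np.percentile(grad, 20))
    return watershed(grad, markers)
```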

Journal ArticleDOI
TL;DR: A novel approach to accelerate the pixel mapping stage by utilising the spatial redundancy of pixels in the image and the inherent topology-preserving nature of the resulting codebook, which outperforms ordinary solutions and is comparable to state-of-the-art solutions in terms of execution time.
Abstract: SOM-based image quantisation requires a considerable amount of processing time, even during the pixel mapping stage. Basically, a full search algorithm is employed to find the codeword, within a codebook, whose distance to the queried pixel is minimum. In this paper, we present a novel approach to accelerate the pixel mapping stage by utilising the spatial redundancy of pixels in the image and the inherent topology-preserving nature of the resulting codebook. The experimental results confirm that the proposed approach outperforms ordinary solutions and is comparable to state-of-the-art solutions in terms of execution time. In addition, since the proposed approach does not require codebook sorting or a complex data structure with variable sizes, its implementation is simple and feasible for hardware realisation.
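A sketch of the acceleration idea under stated assumptions: because neighbouring pixels are usually similar (spatial redundancy) and a trained SOM codebook is topologically ordered, the nearest-codeword search for each pixel can start at the previous pixel's winner and examine only its map neighbourhood, falling back to a full search when the local match is poor. The radius and fallback tolerance here are illustrative, not the paper's criteria.

```python
import numpy as np

def map_pixels(pixels: np.ndarray, som: np.ndarray, radius: int = 1, tol: float = 20.0):
    # som: (H, W, 3) grid of codewords; pixels: (N, 3) in raster order.
    h, w, _ = som.shape
    idx = np.zeros((pixels.shape[0], 2), dtype=int)
    r = c = 0  # winner of the previous pixel (seed for the next search)
    for i, p in enumerate(pixels):
        r0, c0 = max(r - radius, 0), max(c - radius, 0)
        window = som[r0:r + radius + 1, c0:c + radius + 1]
        d = np.linalg.norm(window - p, axis=2)
        wr, wc = np.unravel_index(np.argmin(d), d.shape)
        if d[wr, wc] > tol:
            # Local match poor: fall back to a full codebook search.
            d_full = np.linalg.norm(som - p, axis=2)
            r, c = np.unravel_index(np.argmin(d_full), d_full.shape)
        else:
            r, c = r0 + wr, c0 + wc
        idx[i] = (r, c)
    return idx
```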

Journal ArticleDOI
TL;DR: A normalised mutual information based B-spline registration algorithm is used to iteratively refine the registration of soft tissues, and at the same time keep the rigid transformation for each bony structure.
Abstract: A hybrid rigid and non-rigid registration algorithm has been presented to register thoracic and abdominal CT images of the same subject scanned at different times. The bony structures are first segmented from two different time CT images, respectively. Then, the segmented bony structures in the two respective images are registered based on their boundary points using a soft correspondence matching algorithm, with a rigid transformation constraint on each bony structure. With estimated correspondences in bony structures, the dense deformations in the entire images are interpolated by a thin plate spline (TPS) interpolation technique. To improve the alignment of soft tissues in the images as well, a normalised mutual information based B-spline registration algorithm is used to iteratively refine the registration of soft tissues, and at the same time keep the rigid transformation for each bony structure. This registration refinement procedure is repeated until the algorithm converges. The proposed hybrid registration algorithm has been applied to the clinical data with very encouraging results as measured by two clinical radiologists.

Journal ArticleDOI
L-C Jin, W-G Wan, X-Q Yu
TL;DR: In this article, the authors propose a variational scheme for 3D reconstruction from a point cloud using an estimated topological surface, which represents a continuum of surface reconstruction solutions of a given non-integrable gradient field.
Abstract: In this paper, we propose a novel variational scheme for three-dimensional (3D) reconstruction from a point cloud using an estimated topological surface. The scheme recovers the underlying 3D shape by integrating an estimated surface topology gradient field. The estimated gradient is usually non-integrable due to the presence of noise and outliers in the estimation process and inherent ambiguities. The method represents a continuum of surface reconstruction solutions of a given non-integrable gradient field. For an N×N point cloud, the subspace of all integrable gradient fields is of dimension N²−1. The method can be applied to derive a range of meaningful surface reconstructions from this high-dimensional space. We show that by using a progression of spatially varying anisotropic weights, significant improvements in surface reconstruction from point clouds can be achieved. Simulated surfaces are studied experimentally, and the results validate that the proposed approach significantly improves the reconstruction.
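The integration step at the heart of such schemes has a classic closed form worth sketching: project the (generally non-integrable) gradient field onto the integrable subspace and invert in the Fourier domain, as in Frankot and Chellappa's method. The paper's spatially varying anisotropic weighting, which is its key improvement, is not reproduced.

```python
import numpy as np

def integrate_gradient(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    # p, q: estimated dZ/dx and dZ/dy on an h x w grid (possibly
    # non-integrable); returns the least-squares integrable surface Z.
    h, w = p.shape
    wx = 2j * np.pi * np.fft.fftfreq(w)[None, :]
    wy = 2j * np.pi * np.fft.fftfreq(h)[:, None]
    denom = np.abs(wx) ** 2 + np.abs(wy) ** 2
    denom[0, 0] = 1.0  # avoid division by zero at the DC term
    Z = (np.conj(wx) * np.fft.fft2(p) + np.conj(wy) * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0      # the mean height is unobservable; fix it to zero
    return np.real(np.fft.ifft2(Z))
```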

Journal ArticleDOI
TL;DR: A robust and adaptive method to automatically detect and correct red‐eye effect in digital photographs by introducing a novel process of tuning eye candidate points which is followed by robust iris pair selection among the tuned candidates.
Abstract: In this paper, we describe a robust and adaptive method to automatically detect and correct red‐eye effect in digital photographs. It improves the existing iris pair detection approaches by introducing a novel process of tuning eye candidate points which is followed by robust iris pair selection among the tuned candidates. Finally, a novel and highly effective red‐eye correction process is applied to the detected iris regions. The red‐eye correction scheme is adaptive to the severity of redness and results in high correction rate and improved visual appearance. The performance of the proposed method is compared with two existing automatic red‐eye correction methods and exhibits considerable performance gains. Additionally, the performance of eye detection part of the algorithm is separately evaluated on three well‐known images databases. The results have shown that the method is extremely robust in detection and correction of red‐eye artefact. The proposed method is designed to correct images without human intervention as the entire process from face detection to red‐eye correction is fully automated.
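A toy sketch of the correction stage only, adaptive to redness severity in the spirit described: inside a detected iris region, each pixel's red channel is pulled towards its green/blue average in proportion to how red it is. The detection and candidate-tuning stages, which are the paper's main contributions, are not reproduced, and the severity rule here is illustrative.

```python
import numpy as np

def correct_red_eye(img: np.ndarray, mask: np.ndarray) -> np.ndarray:
    # img: uint8 RGB image; mask: boolean map of the detected iris region.
    out = img.astype(np.float64)
    r, g, b = out[..., 0], out[..., 1], out[..., 2]
    gb = 0.5 * (g + b)
    # Severity of redness in [0, 1]: how much red exceeds the G/B average.
    redness = np.clip((r - gb) / 255.0, 0.0, 1.0)
    # Blend red towards the G/B average, scaled by severity, inside the mask.
    out[..., 0] = np.where(mask, r - redness * (r - gb), r)
    return np.clip(out, 0, 255).astype(np.uint8)
```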

Journal ArticleDOI
TL;DR: A scheme to store multi-exposure copies of 2DGE images in a high dynamic range vector quantisation (HDRVQ) compressed file is proposed, and it is shown that 2DGE images can be retrieved from the HDRVQ compressed file for any exposure between the starting and ending exposures.
Abstract: Currently, the information retrieved from photographed two-dimensional gel electrophoresis (2DGE) images is sometimes insufficient for continuing pathogenic investigation. In this paper, we propose a scheme to store multi-exposure copies of 2DGE images in a high dynamic range vector quantisation (HDRVQ) compressed file. 2DGE images can be retrieved for different exposures, or for a new exposure not among the original images. The high dynamic range concept was used to composite a detailed 2DGE image from neighbouring images. Results showed that 2DGE images can be retrieved from the HDRVQ compressed file for any exposure between the starting and ending exposures. Furthermore, the retrieved image of a new exposure can also display protein details that were not in the original images. The results of this research may be used by biologists to decide whether a new batch of gel culture is needed for further pathogenic investigation. A new batch is not only expensive but also very time consuming.