
Showing papers presented at "International Symposium on Image and Signal Processing and Analysis in 2005"


Proceedings ArticleDOI
24 Oct 2005
TL;DR: It is first shown using simple arguments that the so-called residual and stratified methods do yield an improvement over the basic multinomial resampling approach, and a central limit theorem is established for the case where resampling is performed using the residual approach.
Abstract: This contribution is devoted to the comparison of various resampling approaches that have been proposed in the literature on particle filtering. It is first shown using simple arguments that the so-called residual and stratified methods do yield an improvement over the basic multinomial resampling approach. A simple counter-example showing that this property does not hold true for systematic resampling is given. Finally, some results on the large-sample behavior of the simple bootstrap filter algorithm are given. In particular, a central limit theorem is established for the case where resampling is performed using the residual approach.
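The three resampling schemes compared in this abstract can be sketched in a few lines of NumPy (an illustrative sketch, not the authors' code):

```python
import numpy as np

def multinomial_resample(weights, rng):
    """Basic scheme: draw N ancestor indices i.i.d. from the weights."""
    n = len(weights)
    return rng.choice(n, size=n, p=weights)

def stratified_resample(weights, rng):
    """One uniform draw per stratum [k/N, (k+1)/N); lower variance."""
    n = len(weights)
    u = (np.arange(n) + rng.random(n)) / n
    cum = np.cumsum(weights)
    cum[-1] = 1.0                      # guard against rounding error
    return np.searchsorted(cum, u)

def residual_resample(weights, rng):
    """Copy each particle floor(N*w_i) times deterministically, then
    resample the remaining slots from the residual weights."""
    n = len(weights)
    counts = np.floor(n * np.asarray(weights)).astype(int)
    residual = n * np.asarray(weights) - counts
    n_rest = n - counts.sum()
    if n_rest > 0:
        extra = rng.choice(n, size=n_rest, p=residual / residual.sum())
        counts += np.bincount(extra, minlength=n)
    return np.repeat(np.arange(n), counts)
```

Each function maps a normalized weight vector to N ancestor indices; the deterministic part of the residual scheme is what makes the central-limit analysis in the paper tractable.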

692 citations


Proceedings ArticleDOI
24 Oct 2005
TL;DR: Thirty local geometrical features extracted from 3D human face surfaces, selected as the most discriminating from a set of 86, have been used to model the face for face recognition.
Abstract: Thirty local geometrical features extracted from 3D human face surfaces have been used to model the face for face recognition. They are the most discriminating ones selected from a set of 86. We have experimented with 420 3D facial meshes (without texture) of 60 individuals. There are 7 images per subject, including views presenting light rotations and facial expressions. The HK algorithm, based on the signs of the mean and Gaussian curvatures, has been used for region segmentation. Experiments under controlled and non-controlled acquisition conditions, considering pose variations and facial expressions, have been carried out to analyze the robustness of the selected characteristics. Recognition rates of 82.0% and 90.16% were obtained when the images are frontal views with neutral expression, using PCA and SVM respectively. The recognition rates only decrease to 76.2% and 77.9%, using PCA and SVM matching schemes respectively, under gesture and light face rotation.
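The HK segmentation rule mentioned above labels each surface point from the signs of the mean curvature H and Gaussian curvature K. A minimal sketch (the label names and the zero tolerance eps are my assumptions, not the paper's):

```python
import numpy as np

def hk_classify(H, K, eps=1e-6):
    """Classify surface points by the signs of mean (H) and Gaussian (K)
    curvature, as in HK region segmentation."""
    H, K = np.asarray(H, float), np.asarray(K, float)
    labels = np.full(H.shape, "flat", dtype=object)
    labels[K < -eps] = "saddle"                      # K < 0: hyperbolic
    labels[(K > eps) & (H < -eps)] = "peak"          # elliptic, convex
    labels[(K > eps) & (H > eps)] = "pit"            # elliptic, concave
    labels[(np.abs(K) <= eps) & (H < -eps)] = "ridge"
    labels[(np.abs(K) <= eps) & (H > eps)] = "valley"
    return labels
```

Connected components of equal labels then form the regions from which the geometrical features are measured.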

77 citations


Proceedings ArticleDOI
24 Oct 2005
TL;DR: This paper presents an enhanced approach for fingerprint segmentation based on the response of eight oriented Gabor filters that has been evaluated in terms of decision error trade-off curves of an overall verification system.
Abstract: An important step in fingerprint recognition is the segmentation of the region of interest. In this paper, we present an enhanced approach for fingerprint segmentation based on the response of eight oriented Gabor filters. The performance of the algorithm has been evaluated in terms of decision error trade-off curves of an overall verification system. Experimental results demonstrate the robustness of the proposed method.
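A bank of eight oriented Gabor filters of the kind described can be generated as follows (the frequency, sigma and kernel size are illustrative choices, not the paper's parameters):

```python
import numpy as np

def gabor_kernel(theta, freq=0.1, sigma=4.0, size=15):
    """Real (even) Gabor kernel tuned to orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the wave
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * freq * xr)

# eight orientations, evenly spaced over [0, pi)
bank = [gabor_kernel(k * np.pi / 8) for k in range(8)]
```

A block is then kept as foreground when its strongest filter response is high, since the oriented ridge-valley pattern of a fingerprint excites at least one of the eight orientations.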

55 citations


Proceedings ArticleDOI
24 Oct 2005
TL;DR: The system tries to improve on the verification results of unimodal biometric systems based on palmprint or facial features by integrating them using fusion at the matching-score level, improving the equal error rate and the minimum total error rate.
Abstract: This paper presents a bimodal biometric verification system for physical access control based on the features of the palmprint and the face. The system tries to improve the verification results of unimodal biometric systems based on palmprint or facial features by integrating them using fusion at the matching-score level. The verification process consists of image acquisition using a scanner and a camera, palmprint recognition based on the principal lines, face recognition with eigenfaces, fusion of the unimodal results at the matching-score level, and finally, a decision based on thresholding. The experimental results show that fusion improves the equal error rate by 0.74% and the minimum total error rate by 1.72%.
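Matching-score fusion of the kind described can be sketched as a weighted sum of min-max-normalized scores followed by thresholding (the weight and threshold values are placeholders; the paper does not publish its exact rule):

```python
import numpy as np

def minmax_norm(scores):
    """Map raw matching scores to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse_and_decide(palm_scores, face_scores, w=0.5, threshold=0.5):
    """Weighted-sum fusion at the matching-score level, then a hard decision."""
    fused = w * minmax_norm(palm_scores) + (1 - w) * minmax_norm(face_scores)
    return fused, fused >= threshold
```

The weight w and the decision threshold are the tuning knobs that trade off the two error rates reported in the abstract.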

44 citations


Proceedings ArticleDOI
24 Oct 2005
TL;DR: A novel face image validation system that performs face detection in order to find facial features and determine image background and compares it to the requirements of International Civil Aviation Organization proposals for machine readable travel documents.
Abstract: In this paper, we present a novel face image validation system. The purpose of the system is to evaluate the quality of face images for identification documents and to detect face images that do not satisfy the image quality requirements. To determine image quality, the system first performs face detection in order to find facial features and determine the image background. The system consists of seventeen separate tests. Each test checks one quality aspect of the face or of the whole image and compares it to the requirements of the International Civil Aviation Organization (ICAO) proposals for machine readable travel documents. The requirements are designed to ensure good conditions for automatic face recognition. The tests are organized in a hierarchical way, so the low-level tests are executed first and the high-level tests are executed last. The result of a test is a fuzzy value representing a measure of the image quality. Each test has a set of parameters that can be tuned to produce the desired performance of the test. Initial testing of the system has been performed on a set of 190 face images and has demonstrated the feasibility of the method.

41 citations


Proceedings ArticleDOI
S. Auberger1, C. Miro1
24 Oct 2005
TL;DR: This paper presents a fast video stabilization algorithm that allows a very robust correction of both translational and rotational jitter, while keeping a very low-cost, low-power solution.
Abstract: This paper presents a fast video stabilization algorithm that allows a very robust correction of both translational and rotational jitter, while keeping a very low-cost, low-power solution. A binary motion estimation is used in some key areas of the image to obtain a field of vectors, minimizing memory constraints. After a careful removal of the outlier motion vectors, the affine parameters that describe the global rotational and translational motion of the image are extracted. This motion is then properly filtered to retain only its intentional component. Performances of the algorithm and the complexity of a possible software solution in a realistic platform are evaluated.

38 citations


Proceedings ArticleDOI
24 Oct 2005
TL;DR: This work presents a technique for robustly and automatically detecting a set of user-selected facial features in images, such as the eye pupils, the tip of the nose and the mouth centre, based on a specific architecture of heterogeneous neural layers.
Abstract: We present a technique for robustly and automatically detecting a set of user-selected facial features in images, such as the eye pupils, the tip of the nose, the mouth centre, etc. Based on a specific architecture of heterogeneous neural layers, the proposed system automatically synthesises simple problem-specific feature extractors and classifiers from a training set of faces with annotated facial features. After training, the facial feature detection system acts like a pipeline of simple filters that treats the raw input face image as a whole and builds global facial feature maps, where facial feature positions can easily be retrieved by a simple search for global maxima. We experimentally show that our method is very robust to lighting and pose variations as well as noise and partial occlusions.

37 citations


Proceedings ArticleDOI
24 Oct 2005
TL;DR: A set of invariant geometrical attributes which characterize the defect shape is proposed, and an artificial neural network is used for defect classification, which consists of assigning the principal types of weld defects to four categories according to the morphological characteristics of the defects usually met in practice.
Abstract: The interpretation of possible weld discontinuities in industrial radiography is performed by human interpreters. Consequently, it is subject to subjective factors such as the aptitude and experience of the interpreter, in addition to the poor quality of radiographic images, due essentially to the exposure conditions. These considerations make weld quality interpretation inconsistent, labor intensive and sometimes biased. It is thus desirable to develop computer-aided techniques to assist the interpreter in evaluating the quality of the welded joints. For the characterization of the weld defect region, features which are invariant under the usual geometric transformations are necessary, because the same defect can be seen from several angles according to the orientation and the distance from the welded framework to the radiation source. Thus, a set of invariant geometrical attributes which characterize the defect shape is proposed. The principal component analysis technique is used to reduce the number of attribute variables in order to improve defect classification performance. Thereafter, an artificial neural network is used for weld defect classification. The proposed classification consists of assigning the principal types of weld defects to four categories according to the morphological characteristics of the defects usually met in practice.

33 citations


Proceedings ArticleDOI
24 Oct 2005
TL;DR: In this article, a flexible statistical model of a dense set of facial surface points combined with an associated sparse set of skull landmarks is used to fit the model skull landmarks to corresponding landmarks indicated on a digital copy of the skull to be reconstructed.
Abstract: Forensic facial reconstruction aims at estimating the facial outlook associated to an unknown skull specimen. Estimation is based on tabulated average values of soft tissue thicknesses measured at a sparse set of landmarks on the skull. Traditional 'plastic' methods apply modeling clay or plasticine on a cast of the skull approximating the estimated tissue depths at the landmarks and interpolating in between. Current computerized techniques mimic this landmark interpolation procedure using a single facial surface template. However, the resulting reconstruction is biased by the specific choice of the template. We reduce this bias by using a flexible statistical model of a dense set of facial surface points combined with an associated sparse set of skull landmarks. The reconstruction is obtained by fitting the model skull landmarks to the corresponding landmarks indicated on a digital copy of the skull to be reconstructed. The fitting process alternates between changing the face-specific statistical model parameters and interpolating the remaining landmark fit error using a minimal bending thin-plate spline (TPS) based deformation. This iterative process is shown by experiment to converge to a realistic reconstruction of the face, independent of the initial template.
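The minimal-bending thin-plate spline (TPS) interpolation step used above can be sketched in 2-D as follows (the paper works on 3-D landmarks; this 2-D sketch uses the standard kernel U(r) = r^2 log r):

```python
import numpy as np

def _U(d2):
    # TPS radial kernel U(r) = r^2 log r, written in terms of d2 = r^2
    return np.where(d2 > 1e-12, 0.5 * d2 * np.log(np.maximum(d2, 1e-12)), 0.0)

def tps_fit(src, dst):
    """Solve for the TPS that maps src landmarks exactly onto dst."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    K = _U(np.sum((src[:, None] - src[None, :]) ** 2, axis=2))
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    return np.linalg.solve(L, rhs)       # n warp weights + 3 affine rows

def tps_apply(params, src, pts):
    """Evaluate the fitted spline at arbitrary points."""
    src, pts = np.asarray(src, float), np.asarray(pts, float)
    U = _U(np.sum((pts[:, None] - src[None, :]) ** 2, axis=2))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ params[:len(src)] + P @ params[len(src):]
```

The side conditions (last three rows of L) force the non-affine weights to have zero total force and moment, which is what makes the interpolant the one of minimal bending energy.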

31 citations


Proceedings ArticleDOI
24 Oct 2005
TL;DR: A simulation-based method for multitarget tracking and detection using sequential Monte Carlo (SMC), or particle filtering (PF) methods, which utilises the sequential importance sampling filter for recursive target state estimation, in conjunction with a 2-D data assignment method for measurement-to-target association.
Abstract: In this paper, we present a simulation-based method for multitarget tracking and detection using sequential Monte Carlo (SMC), or particle filtering (PF), methods. The proposed approach is applicable to nonlinear and non-Gaussian models for the target dynamics and measurement likelihood, where the environment is characterised by a high clutter rate and low detection probability. The number of targets is estimated by continuously monitoring the events represented by the regions of interest (ROIs) in the surveillance region. The proposed approach utilises the sequential importance sampling filter for recursive target state estimation, in conjunction with a 2-D data assignment method for measurement-to-target association. Computer simulations are also included to demonstrate and evaluate the performance of the proposed approach.
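The measurement-to-target association step can be illustrated with a greedy nearest-neighbour pairing (a simplification of the optimal 2-D assignment used in the paper; the gate value is a placeholder):

```python
import numpy as np

def greedy_associate(predicted, measurements, gate=5.0):
    """Greedily pair the globally closest predicted-target/measurement
    couple within the gate, using each target and measurement at most once."""
    cost = np.linalg.norm(
        np.asarray(predicted, float)[:, None, :]
        - np.asarray(measurements, float)[None, :, :], axis=2)
    pairs = []
    while True:
        i, j = np.unravel_index(np.argmin(cost), cost.shape)
        if not np.isfinite(cost[i, j]) or cost[i, j] > gate:
            break
        pairs.append((i, j))
        cost[i, :] = np.inf            # retire this target row
        cost[:, j] = np.inf            # retire this measurement column
    return pairs
```

Measurements left unpaired after gating are treated as clutter or as candidate new targets; an optimal assignment solver would replace the greedy loop in a full implementation.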

27 citations


Proceedings ArticleDOI
01 Jan 2005
TL;DR: A speaker independent "liveness" verification method for audio-video identification systems that uses the correlation that exists between the lip movements and the speech produced to ensure that biometric cues being acquired are actual measurements from a live person who is present at the time of capture.
Abstract: In biometrics, it is crucial to detect impostors and thwart replay attacks. However, little research has so far focused on "liveness" verification. This test ensures that the biometric cues being acquired are actual measurements from a live person who is present at the time of capture. Here, we propose a speaker-independent "liveness" verification method for audio-video identification systems. It uses the correlation that exists between the lip movements and the speech produced. Two data analysis methods are considered to model this statistical link. Finally, according to tests carried out on the XM2VTS database, the best liveness verification EER achieved is 12.5%.

Proceedings ArticleDOI
24 Oct 2005
TL;DR: This paper shows that minimization of L1 + TV yields a self-dual and contrast invariant filter that can be expressed as a Markov random field on this tree, and presents some results that demonstrate that these new filters can be particularly useful as a pre-processing stage before segmentation.
Abstract: This paper sheds new light on minimization of the total variation under the L1-norm as data fidelity term (L1 + TV) and its link with mathematical morphology. It is well known that morphological filters feature the property of being invariant with respect to any change of contrast. First, we show that minimization of L1 + TV yields a self-dual and contrast invariant filter. Then, we further constrain the minimization process by only optimizing the grey levels of level sets of the image while keeping their boundaries fixed. This new constraint is maintained thanks to the fast level set transform, which yields a complete representation of the image as a tree. We show that this filter can be expressed as a Markov random field on this tree. Finally, we present some results that demonstrate that these new filters can be particularly useful as a pre-processing stage before segmentation.

Proceedings ArticleDOI
24 Oct 2005
TL;DR: By using permutation methodology in a Monte Carlo sampling procedure, this paper investigates the probability distributions of the recognition rates of some well-known algorithms and reports detailed descriptive statistics on the comparative performance of PCA, ICA and LDA.
Abstract: In this paper we address the issue of evaluating face recognition algorithms using descriptive statistical tools. By using permutation methodology in a Monte Carlo sampling procedure, we investigate the probability distributions of the recognition rates of some well-known algorithms (namely, PCA, ICA and LDA). Given the contradictory literature on comparisons of these algorithms, we believe that this kind of independent study is important and contributes to a better understanding of each algorithm. We show how a simplistic approach to comparing these algorithms can be misleading and propose a full statistical methodology to be used in future reports. By reporting detailed descriptive statistical results, this paper is, to our knowledge, the only detailed report on the comparative performance of PCA, ICA and LDA currently available in the literature. Our experiments show that the exact choice of images to be in a gallery or in a probe set has a great effect on recognition results, and this fact further emphasizes the importance of reporting detailed results. We hope that this study helps to advance the state of experiment design in computer vision.
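The core idea, that a recognition rate is a distribution over random gallery/probe splits rather than a single number, can be sketched as follows (the per-image `correct` indicator stands in for running a real recognizer):

```python
import numpy as np

def rate_distribution(correct, probe_size, n_trials=1000, seed=0):
    """Empirical distribution of the recognition rate over random probe
    sets: each trial permutes the image pool and scores a random subset."""
    rng = np.random.default_rng(seed)
    correct = np.asarray(correct, dtype=bool)
    rates = np.empty(n_trials)
    for t in range(n_trials):
        probe = rng.permutation(len(correct))[:probe_size]
        rates[t] = correct[probe].mean()
    return rates
```

Reporting the spread of `rates` (not just its mean) is what makes two algorithms statistically comparable.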

Proceedings ArticleDOI
24 Oct 2005
TL;DR: In this paper, the authors present an approach to build 3D statistical models of the skull and the face with soft tissues from 3D CT scans, which is used by their reconstruction method to produce 3D soft tissues.
Abstract: The aim of craniofacial reconstruction is to produce a likeness of a face from the skull. Little work on computer-assisted facial reconstruction has been done in the past, due to poor machine performance and limited data availability, and most existing work relies on manual reconstruction. In this paper, we present an approach to build 3D statistical models of the skull and the face with soft tissues from 3D CT scans. This statistical model is used by our reconstruction method to produce 3D soft tissues from the skull of one individual. Results on real data are presented and are promising.

Proceedings ArticleDOI
24 Oct 2005
TL;DR: A target detection method for low contrast forward looking infrared (FLIR) images is proposed, consisting of three stages: a center-surround difference with a local adaptive threshold to find salient areas in the input image, local thresholding applied to the local region of interest (ROI) to segment target silhouettes precisely, and template-based clutter removal.
Abstract: A target detection method for low contrast forward looking infrared (FLIR) images is proposed. It is known that detecting small targets in remotely sensed images is a difficult and challenging task. The goal is to identify target areas with a small number of false alarms in a thermal infrared scene of a battlefield. The proposed method consists of the following three stages. First, a center-surround difference with a local adaptive threshold is used to find salient areas in an input image. Second, local thresholding is applied to the local region of interest (ROI) based on the result of the first step; this second step is needed to segment target silhouettes precisely. Third, the extracted binary target silhouettes are compared with a target template using size and affinity to remove clutter. In the experiments, many natural infrared images with high variability are used to demonstrate the performance of the proposed method. It is compared with a morphological method using the receiver operating characteristic (ROC) curve and execution time. The results show that our method is superior to the morphological method and can be applied to automatic target recognition (ATR) systems.
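The first stage, a center-surround difference with a local adaptive threshold, can be sketched with integral-image box means (the window sizes and the threshold factor k are illustrative, not the paper's values):

```python
import numpy as np

def box_mean(img, k):
    """Mean over a k x k window (k odd) via an integral image (edge-padded)."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    ii = np.pad(np.cumsum(np.cumsum(p, axis=0), axis=1), ((1, 0), (1, 0)))
    h, w = img.shape
    return (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
            - ii[k:k + h, :w] + ii[:h, :w]) / (k * k)

def detect_salient(img, c=3, s=9, k=1.0):
    """Center-surround difference thresholded adaptively: a pixel is salient
    when its difference exceeds the local mean plus k standard deviations."""
    diff = box_mean(img, c) - box_mean(img, s)
    thr = box_mean(diff, s) + k * diff.std()
    return diff > thr
```

Small bright targets produce a strong center-minus-surround response, while extended warm backgrounds raise both means and cancel out.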

Proceedings ArticleDOI
24 Oct 2005
TL;DR: A novel pre-SVM processing technique is presented, which performs pixel-level and multi-resolution analysis in order to discard portions of the frame that are not likely to contain pedestrians and allows exploiting the SVM as a very accurate classifier focused on the most critical cases.
Abstract: This paper describes the algorithms we developed for a new automotive night vision system for pedestrian detection based on near infrared (NIR) illuminators and sensors. The system applies in the night domain the SVM technique, which has already been successfully implemented in daylight applications; in this project we developed optimizations in order to meet the accuracy and time performance requirements for in-vehicle deployment. In particular, we present a novel pre-SVM processing technique, which performs pixel-level and multi-resolution analysis in order to discard portions of the frame that are not likely to contain pedestrians. This procedure allows exploiting the SVM as a very accurate classifier focused on the most critical cases.

Proceedings ArticleDOI
24 Oct 2005
TL;DR: In this article, a color-based probabilistic tracker was proposed for tracking players on the playground during a sport game, where the players are being tracked in their natural environment and this environment is subjected to certain rules of the game.
Abstract: The interest in the field of computer-aided analysis of sport events is ever growing, and the ability to track objects during a sport event has become an elementary task for nearly every sport analysis system. In this paper we present a color-based probabilistic tracker that is suitable for tracking players on the playground during a sport game. Since the players are being tracked in their natural environment and this environment is subject to certain rules of the game, we use the concept of closed worlds to model the scene context and thus improve the reliability of tracking.

Proceedings ArticleDOI
24 Oct 2005
TL;DR: A new method for calculation of perfusion from the contrast agent profile of a sequence of X-ray angiograms is presented, which utilizes Wiener filtering for denoising of time signals.
Abstract: In this paper we present a method for extraction of functional information from a time-sequence of X-ray angiographic images. By observing contrast agent propagation profile in a region of the angiogram one can calculate a number of parameters of that profile. Each parameter can be used to construct a parametric image of the imaged area. Such parametric images present a functional rather than morphological aspect of the tissue. The most important functional parameter is perfusion. Perfusion is defined as a blood flow at the capillary level and is commonly used to detect ischemic areas. Perfusion CT and perfusion MRI (pMRI) modalities have commonly been used to extract perfusion data. In this paper, a new method for calculation of perfusion from the contrast agent profile of a sequence of X-ray angiograms is presented. The method utilizes Wiener filtering for denoising of time signals. The experimental results are computed on a sequence of cerebral angiograms.
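The Wiener denoising step applied to the per-pixel time signals can be sketched with the classic local-statistics form (window length and noise-variance estimate are my assumptions; the paper does not publish its filter parameters):

```python
import numpy as np

def wiener1d(x, k=5, noise_var=None):
    """Local-statistics Wiener filter for a 1-D time-intensity curve:
    y = mu + max(var - nv, 0) / max(var, nv) * (x - mu), with the local
    mean mu and variance var taken over a k-sample window (k odd)."""
    pad = k // 2
    xp = np.pad(np.asarray(x, float), pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(xp, k)
    mu, var = win.mean(axis=1), win.var(axis=1)
    nv = var.mean() if noise_var is None else noise_var
    gain = np.maximum(var - nv, 0.0) / np.maximum(np.maximum(var, nv), 1e-12)
    return mu + gain * (x - mu)
```

Where the local variance is close to the noise floor the gain drops to zero and the filter smooths; where the contrast bolus causes genuine signal variance the gain approaches one and the profile shape is preserved.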

Proceedings ArticleDOI
24 Oct 2005
TL;DR: In this paper, a modified frequency response masking (FRM) technique was proposed for the synthesis of linear phase, sharp transition, low arithmetic complexity FIR filters, composed of lowpass and bandpass subfilters.
Abstract: This paper proposes a modified frequency response masking (FRM) technique for the synthesis of linear phase, sharp transition, low arithmetic complexity FIR filters. The structure is composed of lowpass and bandpass subfilters which are designed as linear phase, equiripple passband and computationally efficient FIR filters. The frequency responses of the subfilters are modeled using trigonometric functions of frequency, and the design yields closed-form expressions for the impulse response coefficients of the subfilters. The slopes at the edges of the transition region of the subfilter are matched, which makes the frequency response a continuous function of frequency and hence reduces the effects of the Gibbs phenomenon, thereby reducing the passband edge ripple of the subfilters. The bandpass filter eliminates one masking filter and a model filter from the basic FRM approach, thereby simplifying the synthesis of the proposed modified FRM FIR filter.

Proceedings ArticleDOI
24 Oct 2005
TL;DR: On-line signature verification for Tablet PC devices is studied and authentication performance experiments are reported considering both random and skilled forgeries by using a new database with over 3000 signatures.
Abstract: On-line signature verification for Tablet PC devices is studied. The on-line signature verification algorithm presented by the authors at the First International Signature Verification Competition (SVC 2004) is adapted to work in Tablet PC environments. An example prototype of securing access and securing document application using this Tablet PC system is also reported. Two different commercial Tablet PCs are evaluated, including information of interest for signature verification systems such as sampling and pressure statistics. Authentication performance experiments are reported considering both random and skilled forgeries by using a new database with over 3000 signatures.

Proceedings ArticleDOI
24 Oct 2005
TL;DR: This paper proposes a detection approach not requiring the binarization of the difference image, and results are demonstrated for a crowded scene and evaluation of the proposed tracking framework is presented.
Abstract: Change detection by background subtraction is a common approach to detecting moving foreground. The resulting difference image is usually thresholded to obtain objects based on pixel connectedness, and the resulting blob objects are subsequently tracked. This paper proposes a detection approach that does not require binarization of the difference image. Local density maxima in the difference image - usually representing moving objects - are outlined by a fast non-parametric mean shift clustering procedure. Object tracking is carried out by updating and propagating cluster parameters over time using the mode-seeking property of the mean shift procedure. For occluding targets, a fast procedure determining the object configuration maximizing image likelihood is presented. Detection and tracking results are demonstrated for a crowded scene, and an evaluation of the proposed tracking framework is presented.
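The mode-seeking core of the mean shift procedure used here can be sketched with a flat kernel over weighted pixel positions (where the weights would be the difference-image values; kernel and bandwidth are illustrative choices):

```python
import numpy as np

def mean_shift(points, weights, start, bandwidth=3.0, n_iter=50):
    """Weighted mean shift with a flat kernel: move the window centre to the
    weighted mean of the points within the bandwidth until it stops moving;
    the fixed point is a local density maximum (mode)."""
    points = np.asarray(points, float)
    weights = np.asarray(weights, float)
    x = np.asarray(start, float)
    for _ in range(n_iter):
        inside = np.linalg.norm(points - x, axis=1) <= bandwidth
        if not inside.any():
            break
        x_new = np.average(points[inside], axis=0, weights=weights[inside])
        if np.linalg.norm(x_new - x) < 1e-9:
            break                        # converged to a mode
        x = x_new
    return x
```

Seeding `start` with the cluster position from the previous frame is what turns this mode seeking into the tracking-by-propagation scheme the abstract describes.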

Proceedings ArticleDOI
24 Oct 2005
TL;DR: This paper describes the performance of combining Hough transform and hidden Markov models in a multifont Arabic OCR system and some promising experimental results are reported.
Abstract: Optical character recognition (OCR) has been an active subject of research since the early days of computers. Despite the age of the subject, it remains one of the most challenging and exciting areas of research in computer science. In recent years it has grown into a mature discipline, producing a huge body of work. Arabic has been one of the last major languages to receive attention in character recognition. This is due, in part, to the cursive nature of the task, since even printed Arabic characters are in cursive form. This paper describes the performance of combining the Hough transform and hidden Markov models in a multifont Arabic OCR system. Experimental tests have been carried out on a set of 85,000 character samples corresponding to 5 different fonts among the most commonly used in Arabic writing. Some promising experimental results are reported.

Proceedings ArticleDOI
24 Oct 2005
TL;DR: In this article, the spectral performance of the Bhattacharyya coefficient is compared with the spatial matching criterion, i.e. the mean square difference, and its biased nature is explored and demonstrated through numerous experiments with different kinds of non-rigid maneuvering objects in cluttered and less cluttered environments.
Abstract: The Bhattacharyya coefficient is a popular method that uses color histograms to correlate images. In this paper, we show that when this method is applied to gray scale images, it produces biased results. The biased nature is explored and demonstrated through numerous experiments with different kinds of non-rigid maneuvering objects in cluttered and less cluttered environments. The spectral performance of the Bhattacharyya curve is compared with the spatial matching criterion, i.e. the mean square difference.
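The Bhattacharyya coefficient between two histograms, the quantity under study here, is simply:

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient sum_i sqrt(p_i * q_i) between two histograms
    (normalized internally); 1.0 for identical distributions, 0.0 for
    histograms with disjoint support."""
    p = np.asarray(h1, float)
    p = p / p.sum()
    q = np.asarray(h2, float)
    q = q / q.sum()
    return float(np.sqrt(p * q).sum())
```

Because the coefficient depends only on bin occupancy and not on where pixels sit spatially, two very different gray-scale patches can score near 1.0, which is the source of the bias the paper investigates.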

Proceedings ArticleDOI
24 Oct 2005
TL;DR: A linear combination of three models (a connexionist model, an ellipse model, and a skin color model) is used for face localization, and a product fusion of an eye model and a color model coarsely locates the eyes.
Abstract: We present a new method dedicated to the localization of faces and eyes in color images. It combines different experts. A linear combination of three models (a connexionist model, an ellipse model, and a skin color model) is used for face localization. A product fusion of an eye model (Chinese transform) and a color model (modified GMM) coarsely locates the eyes. An "AND" operation is then applied between four sources of information extracted at these positions. This leads to a refined localization of the eyes.

Proceedings ArticleDOI
24 Oct 2005
TL;DR: Using neighbourhood sequences on n-dimensional digital spaces, a formula to compute the distance of any pair of points is given, with the special cases of 2- and 3-dimensional digital spaces underlined.
Abstract: Neighbourhood sequences play a very important role in digital image processing. In this paper we give some new results in this area. Using neighbourhood sequences on n-dimensional digital spaces, we give a formula to compute the distance of any pair of points. For practical reasons, we underline the special cases of 2- and 3-dimensional digital spaces. It is known that there are non-metrical distances defined by neighbourhood sequences. Furthermore, in this paper we answer the question of what the necessary and sufficient condition is for such a distance to be a metric.
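In 2-D, the distance generated by a periodic neighbourhood sequence can be computed by a greedy step simulation (a sketch of the idea only; the paper's formula covers arbitrary dimension and is closed-form):

```python
def ns_distance_2d(p, q, sequence):
    """Distance of p and q under a periodic neighbourhood sequence in 2-D:
    a 1-step (cityblock) reduces only the larger coordinate difference,
    a 2-step (chessboard) may reduce both at once."""
    dx, dy = sorted((abs(p[0] - q[0]), abs(p[1] - q[1])), reverse=True)
    steps = 0
    while dx > 0:
        if sequence[steps % len(sequence)] == 2 and dy > 0:
            dy -= 1                      # diagonal component of a 2-step
        dx -= 1
        steps += 1
        if dy > dx:                      # keep dx as the larger difference
            dx, dy = dy, dx
    return steps
```

The constant sequences (1) and (2) recover the cityblock and chessboard distances, and the alternating sequence (1, 2) gives the familiar octagonal distance.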

Proceedings ArticleDOI
24 Oct 2005
TL;DR: A physics based model of human motion is proposed, which includes internal forces of the persons by the means of the Kalman filter, and the cylindrical envelopes, which produce collision avoiding forces when observed persons come to close proximity.
Abstract: The paper deals with the problem of computer vision based multi-person motion tracking, which in many cases suffers from a lack of discriminating features of the observed persons. To solve this problem, a physics based model of human motion is proposed, which includes internal forces of the persons by means of the Kalman filter, and cylindrical envelopes, which produce collision avoiding forces when observed persons come into close proximity. We tested the proposed method on two sequences, one from a squash match and the other from a basketball game, and found that the number of tracker mistakes significantly decreased.

Proceedings ArticleDOI
24 Oct 2005
TL;DR: The possibility of using full 3-D cross-sectional CT images for establishing a reference database of densely sampled distances between the external surfaces of the skull and skin for automated cranio-facial reconstruction is investigated.
Abstract: In forensic cranio-facial reconstruction, facial features of an unknown individual are estimated from an unidentified skull, based on a mixture of experimentally obtained guidelines on the relationship between soft tissues and the underlying skeleton. In this paper, we investigate the possibility of using full 3-D cross-sectional CT images for establishing a reference database of densely sampled distances between the external surfaces of the skull and skin for automated cranio-facial reconstruction. For each CT image in the reference database, the hard tissue (skull) and extra-cranial soft tissue (skin) volumes are segmented and transformed into signed distance transform (sDT) maps. A simplified procedure for cranio-facial reconstruction was implemented, by warping all reference skull sDT maps to the target skull sDT. These warps are subsequently applied to the reference skin sDT maps and the zero level set of their arithmetic average is defined as the reconstructed target skin surface. Initial results are shown to prove the validity of the concept, but further refinement of the procedures involved and a qualitative validation are required.

Proceedings ArticleDOI
24 Oct 2005
TL;DR: The task of additional compression of images earlier coded using JPEG is considered and a novel efficient method for coding quantized DCT coefficients is proposed, based on coefficient separation into bit planes, taking into account correlation between the values of neighbor coefficients in blocks.
Abstract: The task of additional compression of images earlier coded using JPEG is considered. A novel efficient method for coding quantized DCT coefficients is proposed. It is based on coefficient separation into bit planes, taking into account the correlation between the values of neighboring coefficients in blocks, between the values of the corresponding coefficients of neighboring blocks, as well as between the corresponding coefficients of different color layers. It is shown that the designed technique allows the compression ratio of images already compressed by JPEG to be increased by a further factor of 1.3 to 2.3 without introducing additional losses.

Proceedings ArticleDOI
24 Oct 2005
TL;DR: In this paper, a method to use remote sensing and GIS to help the task of area reduction in mine action is presented, synthesizing all relevant information in thematic maps, called danger maps, that can be used as basis for area reduction.
Abstract: We present a method that uses remote sensing and GIS to support the task of area reduction in mine action. The goal is to synthesize all relevant information in thematic maps, called danger maps, that can be used as a basis for area reduction. The information presented in the maps can be extracted from the remote sensing data, come from the mine action centre's mine information system (MIS), or be added after discussion with experts. Blind tests performed on mine-suspected areas in Croatia have shown that the method achieved a reduction rate of 26% and an error rate of 0.1%.

Proceedings ArticleDOI
24 Oct 2005
TL;DR: A new method for forensic facial soft-tissue reconstruction is presented, based on a nonlinear warping technique using radial basis functions known as thin-plate splines extended to 3D space.
Abstract: Facial reconstruction is important in several scientific areas, especially in forensic science and archaeology. In both areas the basis of all work is the skull find of a dead person, which should be reconstructed. This helps with the identification of a skeleton from an open case of death or the comparison of facial features between modern and ancient human beings. In this paper a new method for forensic facial soft-tissue reconstruction is presented. It is based on a nonlinear warping technique using radial basis functions known as thin-plate splines, extended to 3D space. To minimize the amount of error, a regularized thin-plate spline version was implemented. In the manual facial reconstruction procedure, the forensic expert has to attach soft tissue to a skull find. However, since this conventional, manual 4-step approach of i) examination of the skull, ii) development of a reconstruction plan, iii) practical sculpting and iv) mask design is very time consuming, multi-modality elastic matching of 3D MRI soft tissue onto the 3D CT image of a skull find is proposed.