
Showing papers by "Francoise Preteux published in 2005"


Journal ArticleDOI
TL;DR: This work develops a two-step method comprising a motion estimation step using a novel variational non-rigid registration technique based on generalized information measures, and a measurement step, yielding local and segmental deformation parameters over the whole myocardium.

63 citations


Journal ArticleDOI
TL;DR: A computational study of an oscillatory laminar flow of an incompressible Newtonian fluid has been carried out in the proximal part of human tracheobronchial trees, either normal or with a strongly stenosed right main bronchus.
Abstract: A computational study of an oscillatory laminar flow of an incompressible Newtonian fluid has been carried out in the proximal part of human tracheobronchial trees, either normal or with a strongly stenosed right main bronchus. After acquisition with a multislice spiral CT, the thoracic images are processed to reconstruct the geometry of the trachea and the first six bronchus generations and to virtually travel inside this duct network. The facetisation associated with the three-dimensional reconstruction of the tracheobronchial tree is improved to obtain a computation-adapted surface triangulation, which leads to a volumic mesh composed of tetrahedra. The Navier-Stokes equations, associated with the classical boundary conditions and different values of the flow dimensionless parameters, are solved using the finite element method. The airways are assumed to be rigid during rest breathing. The flow distribution among the set of bronchi is determined during the respiratory cycle. Cycle reproducibility and mesh-size effects on the numerical results are examined. Helpful qualitative data are provided rather than accurate quantitative results, in the context of multimodelling from image processing to numerical simulations.
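For orientation, the regime of such an oscillatory airway flow is set by two dimensionless parameters, the Reynolds and Womersley numbers. The sketch below (not from the paper; all numerical values are illustrative assumptions) computes both for tracheal air flow at rest:

```python
import math

def reynolds(mean_velocity, diameter, kinematic_viscosity):
    """Reynolds number Re = U * D / nu of a duct flow."""
    return mean_velocity * diameter / kinematic_viscosity

def womersley(radius, angular_frequency, kinematic_viscosity):
    """Womersley number alpha = R * sqrt(omega / nu) of an oscillatory flow."""
    return radius * math.sqrt(angular_frequency / kinematic_viscosity)

# Illustrative values for rest breathing in the trachea (assumptions, not
# taken from the paper): air nu ~ 1.5e-5 m^2/s, diameter ~ 18 mm,
# mean velocity ~ 1 m/s, breathing period ~ 4 s.
nu, D, U = 1.5e-5, 0.018, 1.0
omega = 2.0 * math.pi / 4.0
print(f"Re    = {reynolds(U, D, nu):.0f}")           # ~1200: laminar range
print(f"alpha = {womersley(D / 2, omega, nu):.2f}")  # ~2.9: mildly unsteady
```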

26 citations


Proceedings ArticleDOI
TL;DR: In this article, the authors provide an overview of the state-of-the-art techniques recently developed within the emerging field of dynamic mesh compression, including static encoders, wavelet-based schemes, PCA-based approaches, differential temporal and spatio-temporal predictive techniques, and clustering-based representations.
Abstract: This paper provides an overview of the state-of-the-art techniques recently developed within the emerging field of dynamic mesh compression. Static encoders, wavelet-based schemes, PCA-based approaches, differential temporal and spatio-temporal predictive techniques, and clustering-based representations are considered, presented, analyzed, and objectively compared in terms of compression efficiency, algorithmic and computational aspects, and offered functionalities (such as progressive transmission, scalable rendering, and field of applicability). The proposed comparative study reveals that: (1) clustering-based approaches offer the best compromise between compression performance and computational complexity; (2) PCA-based representations are highly efficient on long animated sequences (i.e., with a number of mesh vertices much smaller than the number of frames) at the price of a prohibitive computational complexity of the encoding process; (3) spatio-temporal Dynapack predictors provide simple yet effective predictive schemes that outperform simpler predictors such as those considered within the interpolator compression node adopted by MPEG-4 within the AFX standard; (4) wavelet-based approaches, which provide the best compression performance for static meshes, show here again good results, with the additional advantage of a fully progressive representation, but their applicability is limited to large meshes with at least several thousand vertices per connected component.
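As background for the predictive family compared above, the sketch below shows closed-loop differential temporal prediction in its simplest form: each frame of an animated mesh is predicted from the reconstructed previous frame and only quantized residuals are kept. It is a toy illustration of the principle, not any of the surveyed codecs:

```python
import numpy as np

def encode_deltas(frames, step=1e-3):
    """Closed-loop differential temporal prediction for an animated mesh.

    frames: (F, V, 3) vertex positions over F frames, fixed connectivity.
    Each frame is predicted from the *reconstructed* previous frame and the
    quantized residual is emitted -- the raw material a dynamic mesh coder
    would entropy-code.
    """
    frames = np.asarray(frames, dtype=np.float64)
    residuals = np.empty((len(frames) - 1, *frames.shape[1:]), dtype=np.int64)
    recon = frames[0].copy()
    for t in range(1, len(frames)):
        residuals[t - 1] = np.round((frames[t] - recon) / step)
        recon += residuals[t - 1] * step  # closed loop: no error drift
    return frames[0], residuals

def decode_deltas(first, residuals, step=1e-3):
    """Invert the encoder by accumulating dequantized residuals."""
    out = [first]
    for r in residuals:
        out.append(out[-1] + r * step)
    return np.stack(out)

# Round-trip: per-frame error stays within step/2 thanks to the closed loop.
anim = np.random.rand(20, 50, 3)
first, res = encode_deltas(anim)
assert np.abs(decode_deltas(first, res) - anim).max() <= 0.5e-3 + 1e-12
```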

13 citations


Proceedings ArticleDOI
TL;DR: This paper presents a region-driven approach to statistical NRR based on regional non-parametric estimates of luminance distributions, which allows compensating for respiratory/cardiac motion artifacts and fitting a segmental heart model used for quantitatively assessing regional myocardial perfusion.
Abstract: Intensity-based Non-Rigid Registration (NRR) techniques using statistical similarity measures have been widely used to address mono- and multimodal image alignment problems in a robust and segmentation-free way. In these approaches, registration is achieved by minimizing the discrepancy between luminance distributions. Classical similarity criteria, including mutual information, f-information and correlation ratio, rely on global luminance statistics over the whole image domain and do not incorporate spatial information. This may lead to inaccurate or geometrically inconsistent (though visually satisfying) alignment of homologous image structures, making these criteria unreliable for atlas-based segmentation purposes. This paper addresses these limitations and presents a region-driven approach to statistical NRR based on regional non-parametric estimates of luminance distributions. The latter are derived from a regional segmentation of the target image which is used as a fixed object/scene template and induces regionalized statistical similarity measures. We provide the expressions of these criteria in the case of generalized information measures and correlation ratio, and derive the corresponding gradient flows over parametric and non-parametric transform spaces. This approach is then applied to the joint non-rigid segmentation and registration of short-axis cardiac perfusion MR sequences using a bi-ventricular heart template. In this framework, region-driven NRR allows compensating for respiratory/cardiac motion artifacts and fitting a segmental heart model used for quantitatively assessing regional myocardial perfusion. Experiments have been performed on a database of 15 pathological subjects, demonstrating the relevance of region-driven NRR over global NRR in terms of computational performance and registration accuracy with respect to an expert reference.
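To make the regionalized-statistics idea concrete, the following sketch estimates mutual information restricted to one region mask and sums it over a label map. It is a minimal histogram-based illustration (function names and the binning choice are ours), not the authors' exact estimator or gradient flow:

```python
import numpy as np

def regional_mutual_information(fixed, moving, mask, bins=32):
    """Mutual information between two images, estimated only over `mask`.

    Luminance statistics are gathered per region instead of over the whole
    image domain -- the core of the regionalized similarity idea.
    """
    a = fixed[mask].ravel()
    b = moving[mask].ravel()
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def region_driven_criterion(fixed, moving, labels):
    """Sum the regional measures over a segmentation of the target image
    (labels would come from, e.g., a bi-ventricular template; background
    could be excluded depending on the application)."""
    return sum(regional_mutual_information(fixed, moving, labels == k)
               for k in np.unique(labels))
```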

11 citations


Book ChapterDOI
26 Oct 2005
TL;DR: An automated 3D approach for the segmentation of the vascular structure in CT hepatic venography is proposed, providing the appropriate tools for such an investigation and making it possible to discriminate the opacified vessels from the bone structures and liver parenchyma regardless of the presence of noise or inter-patient variability in contrast medium dispersion.
Abstract: Preventing complications during hepatic surgery in living-donor transplantation or in oncologic resections requires a careful preoperative analysis of the hepatic venous anatomy. Such an analysis relies on CT hepatic venography data, which enhances the vascular structure through contrast medium injection. However, a 3D investigation of the enhanced vascular anatomy based on typical computer vision tools is ineffective because of the large amount of occlusive opacities to be removed. This paper proposes an automated 3D approach for the segmentation of the vascular structure in CT hepatic venography, providing the appropriate tools for such an investigation. The developed methodology relies on advanced topological and morphological operators applied in mono- and multi-resolution filtering schemes. It makes it possible to discriminate the opacified vessels from the bone structures and liver parenchyma regardless of the presence of noise or inter-patient variability in contrast medium dispersion. The proposed approach was demonstrated at different phases of hepatic perfusion and is currently under extensive validation in clinical routine.
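One representative operator from this morphological toolbox is grey-level reconstruction. The sketch below (a generic opening by reconstruction using SciPy/scikit-image, not the authors' full mono-/multi-resolution pipeline) removes small bright noise while restoring the exact shape of surviving structures:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import reconstruction

def opening_by_reconstruction(volume, erosion_size=3):
    """Grey-level opening by reconstruction of a 3D volume.

    Bright structures too small to survive the erosion are removed, while
    those that survive are restored with their exact original shape.
    """
    seed = ndi.grey_erosion(volume, size=(erosion_size,) * 3)
    return reconstruction(seed, volume, method='dilation')

# Hypothetical usage on a venography volume loaded as a numpy array:
# filtered = opening_by_reconstruction(ct_volume, erosion_size=5)
```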

9 citations


Proceedings ArticleDOI
TL;DR: An unsupervised four-step approach to quantitatively assessing myocardial perfusion is developed, automatically detecting a region of interest for the heart over the whole sequence and selecting a reference frame with maximal myocardium contrast.
Abstract: Quantitatively assessing myocardial perfusion is a key issue for the diagnosis, therapeutic planning and patient follow-up of cardio-vascular diseases. To this end, perfusion MRI (p-MRI) has emerged as a valuable clinical investigation tool thanks to its ability of dynamically imaging the first pass of a contrast bolus in the framework of stress/rest exams. However, reliable techniques for automatically computing regional first-pass curves from 2D short-axis cardiac p-MRI sequences remain to be elaborated. We address this problem and develop an unsupervised four-step approach comprising: (i) a coarse spatio-temporal segmentation step, which automatically detects a region of interest for the heart over the whole sequence and selects a reference frame with maximal myocardium contrast; (ii) a model-based variational segmentation step of the reference frame, yielding a bi-ventricular partition of the heart into left ventricle, right ventricle and myocardium components; (iii) a respiratory/cardiac motion artifacts compensation step using a novel region-driven intensity-based non-rigid registration technique, which elastically propagates the reference bi-ventricular segmentation over the whole sequence; (iv) a measurement step, delivering first-pass curves over each region of a segmental model of the myocardium. The performance of this approach is assessed over a database of 15 normal and pathological subjects, and compared with perfusion measurements delivered by an MRI manufacturer's software package based on manual delineations by a medical expert.
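Once steps (i)-(iii) have produced a motion-compensated sequence and a segmental partition, step (iv) reduces to averaging intensities per segment per frame. A minimal sketch, assuming the labels come from the fitted bi-ventricular model:

```python
import numpy as np

def first_pass_curves(sequence, segment_labels):
    """Regional first-pass curves from a registered p-MRI sequence.

    sequence: (T, H, W) motion-compensated short-axis frames (numpy array).
    segment_labels: (H, W) integer map of myocardial segments (0 = outside).
    Returns {segment: (T,) mean-intensity curve}.
    """
    curves = {}
    for seg in np.unique(segment_labels):
        if seg == 0:
            continue  # skip background
        m = segment_labels == seg
        curves[int(seg)] = sequence[:, m].mean(axis=1)
    return curves
```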

9 citations




Book ChapterDOI
13 Nov 2005
TL;DR: This work deploys the well-known 3D simplification based on quadric error metric to create a simplified version of the avatar in question, taking into account that the avatar is also a 3D model based on manifold mesh, and proposes an adaptation of the simplifying technique to avatars.
Abstract: Our work focuses on the simplification of MPEG-4 avatar models. Similar to other general purposed 3D models, these avatars often claim complex, highly detailed presentation to maintain a convincing level of realism. However, the full complexity of such models is not always required, especially when a client terminal — for the reason of portability and cost reduction — cannot or does not necessarily support high complex presentation. First, we deploy the well-known 3D simplification based on quadric error metric to create a simplified version of the avatar in question, taking into account that the avatar is also a 3D model based on manifold mesh. Within this general scope, we introduce a new weight factor to overcome an uncertainty in choosing target for decimation. Next, exploiting the biomechanical characteristic of avatars — having the underlying skeleton structure — we propose an adaptation of the simplifying technique to avatars. The concept of bones is taken into account as either a boundary constraint or a cost-component for the quadric error. Encouraging results can be obtained with these modified procedures.
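The baseline this work extends is the classical Garland-Heckbert quadric error metric. The sketch below shows its core, accumulating plane quadrics per vertex and evaluating an edge-collapse cost; the bone-aware weighting described above is only hinted at in a comment, since its exact form is not given here:

```python
import numpy as np

def face_quadric(p0, p1, p2):
    """Fundamental quadric K = n n^T of a triangle's supporting plane,
    with n = (a, b, c, d), ax + by + cz + d = 0 and (a, b, c) unit."""
    normal = np.cross(p1 - p0, p2 - p0)
    normal = normal / np.linalg.norm(normal)
    n = np.append(normal, -normal.dot(p0))
    return np.outer(n, n)

def vertex_quadrics(vertices, faces):
    """Sum each vertex's incident face quadrics (Garland-Heckbert QEM)."""
    Q = np.zeros((len(vertices), 4, 4))
    for f in faces:
        K = face_quadric(*vertices[f])
        for v in f:
            Q[v] += K
    return Q

def collapse_cost(Q, v_pos):
    """Quadric error of placing a collapsed edge at position v_pos.
    A bone-aware variant, as in the paper, would add a skeleton-derived
    weight or constraint here (hypothetical extension, not sketched)."""
    v = np.append(v_pos, 1.0)
    return float(v @ Q @ v)
```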

7 citations


Proceedings ArticleDOI
TL;DR: An automated approach for the 3D segmentation of the liver vascular structure from 3D CT hepatic venography data is proposed, which takes into account the specificities of anatomical structures in terms of spatial location, connectivity and morphometric properties.
Abstract: In the framework of preoperative evaluation of the hepatic venous anatomy in living-donor liver transplantation or oncologic resections, this paper proposes an automated approach for the 3D segmentation of the liver vascular structure from 3D CT hepatic venography data. The developed segmentation approach takes into account the specificities of anatomical structures in terms of spatial location, connectivity and morphometric properties. It implements basic and advanced morphological operators (closing, geodesic dilation, gray-level reconstruction, sup-constrained connection cost) in mono- and multi-resolution filtering schemes in order to achieve an automated 3D reconstruction of the opacified hepatic vessels. A thorough investigation of the venous anatomy including morphometric parameter estimation is then possible via computer-vision 3D rendering, interaction and navigation capabilities.
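Among the listed operators, geodesic dilation is easily illustrated: a marker image grows by unit dilations but never outside a mask, and iterated to stability it yields morphological reconstruction (e.g., recovering the full opacified vessel tree from a few seed voxels). A textbook binary sketch, not the paper's grey-level variant:

```python
import numpy as np
from scipy import ndimage as ndi

def geodesic_reconstruction(marker, mask):
    """Binary reconstruction by geodesic dilation of `marker` inside `mask`.

    `marker` and `mask` are boolean arrays (2D or 3D); the marker grows by
    unit dilations intersected with the mask until stability.
    """
    current = marker & mask
    while True:
        grown = ndi.binary_dilation(current) & mask
        if np.array_equal(grown, current):
            return current
        current = grown

# Hypothetical usage: seeds inside large opacified vessels, mask = all
# bright voxels; the result keeps only the connected vessel tree.
# vessels = geodesic_reconstruction(seeds, volume > threshold)
```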

4 citations


Book ChapterDOI
TL;DR: This paper presents the first statistical investigation of VLBV in the DCT (2D Discrete Cosine Transform) domain, identifying the particular way in which VLBV obeys the very popular Gaussian law, and points to critical behaviour differences between high and very low bitrate videos.
Abstract: While watermarking methods are generally designed to protect high-quality video (e.g., DVD), a continuously increasing demand for protecting very low bitrate video (VLBV) is also met nowadays; in mobile networks, for example, the video stream may be coded at 64 kbit/s. In this respect, special attention should be paid to the statistical behaviour of the video content. To the best of our knowledge, this paper presents the first statistical investigation of VLBV in the DCT (2D Discrete Cosine Transform) domain, thus identifying the particular way in which VLBV obeys the very popular Gaussian law. It also points to critical behaviour differences between high and very low bitrate videos. These theoretical results are validated in the framework of watermarking experiments carried out in collaboration with the SFR mobile service provider (Vodafone group).
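The kind of investigation described can be sketched as follows: pool 8x8 block DCT coefficients over the frames and test each coefficient position for normality. This uses the D'Agostino-Pearson test as a stand-in; the paper's exact test battery may differ:

```python
import numpy as np
from scipy.fft import dctn
from scipy.stats import normaltest

def dct_gaussianity_map(frames, block=8, alpha=0.01):
    """Per-position normality map of block-DCT coefficients pooled over frames.

    frames: iterable of 2D grey-level arrays. Returns a (block, block)
    boolean map: True where the D'Agostino-Pearson test does *not* reject
    Gaussianity at level alpha.
    """
    samples = []
    for f in frames:
        f = np.asarray(f, dtype=np.float64)
        h, w = (f.shape[0] // block) * block, (f.shape[1] // block) * block
        tiles = (f[:h, :w].reshape(h // block, block, w // block, block)
                          .transpose(0, 2, 1, 3).reshape(-1, block, block))
        samples.append(dctn(tiles, axes=(1, 2), norm='ortho'))
    coeffs = np.concatenate(samples)              # (n_blocks, block, block)
    pvals = np.empty((block, block))
    for i in range(block):
        for j in range(block):
            _, pvals[i, j] = normaltest(coeffs[:, i, j])
    return pvals > alpha
```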

3 citations


Book ChapterDOI
01 Jan 2005
TL;DR: The steady improvements within the distributed network area and advanced communication protocols have promoted the emergence of 3D communities and immersion experiences in distributed 3D virtual environments.
Abstract: The first 3D virtual human model was designed and animated by means of the computer in the late '70s. Since then, virtual character models have become more and more popular, enabling a growing population to impact the everyday real world. From simple and easy-to-control models used in commercial games, to more complex virtual assistants for commercial or informational Web sites, to the new stars of virtual cinema, television, and advertising, the 3D character model industry is currently booming. Moreover, the steady improvements within the distributed network area and advanced communication protocols have promoted the emergence of 3D communities and immersion experiences (Thalmann, 2000) in distributed 3D virtual environments.

Proceedings ArticleDOI
16 May 2005
TL;DR: This work has been carried out within the framework of the so-called TOON project, supported by the French National Agency for Valorization of Research and financed by the Quadraxis company.
Abstract: This work has been carried out within the framework of the so-called TOON project, supported by the French National Agency for Valorization of Research and financed by the Quadraxis company. TOON proposes a unified platform for automating the 2D cartoon production chain, involving 2D/3D reconstruction, registration and animation capabilities.

Proceedings Article
14 Jul 2005
TL;DR: An innovative two-dimensional approach for character recognition and segmentation is proposed that combines Markovian modeling and an efficient decoding algorithm with a windowed spectral feature extraction scheme.
Abstract: Processing text components in multimedia contents remains a challenging issue for document indexing and retrieval. More specifically, handwritten character processing is a very active field of pattern recognition. This paper describes an innovative two-dimensional approach for character recognition and segmentation. The proposed method combines Markovian modeling and an efficient decoding algorithm with a windowed spectral feature extraction scheme. A rigorous evaluation methodology is applied to analyse and discuss the performance obtained for digit and word recognition.
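The decoding side of such Markovian modeling is typically a Viterbi search over state paths scored by the spectral features. A generic log-domain decoder sketch (not the paper's specific 2D model):

```python
import numpy as np

def viterbi(log_A, log_B, log_pi):
    """Most likely state path of a discrete HMM (log-domain Viterbi).

    log_A: (S, S) log transition matrix; log_B: (T, S) per-frame log
    emission scores (e.g. from windowed spectral feature vectors);
    log_pi: (S,) log initial probabilities.
    """
    T, S = log_B.shape
    delta = log_pi + log_B[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A   # (from_state, to_state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):         # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```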

Proceedings ArticleDOI
07 Nov 2005
TL;DR: The paper approaches for the first time the mathematical models for two random processes, namely the original video to be protected and a very harmful attack any watermarking method should face: the StirMark attack.
Abstract: The cell phone expansion provides an additional direction for digital video content distribution: music clips, news and sport events are more and more transmitted toward mobile users. Consequently, from the watermarking point of view, a new challenge should be taken up: very low bitrate contents (e.g. as low as 64 kbit/s) are now to be protected. Within this framework, the paper approaches for the first time the mathematical models for two random processes, namely the original video to be protected and a very harmful attack any watermarking method should face: the StirMark attack. By applying an advanced statistical investigation (combining the Chi-square, Rho, Fisher and Student tests) in the discrete wavelet domain, it is established that the popular Gaussian assumption can be used only very restrictively when describing the former process and has nothing to do with the latter. As these results can a priori determine the performance of several watermarking methods, of both spread spectrum and informed embedding types, they should be considered in the design stage.
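As an illustration of one test from this battery, the sketch below applies a chi-square goodness-of-fit against a fitted Gaussian to each detail subband of a 2D discrete wavelet transform (PyWavelets); the Rho, Fisher and Student tests would be applied analogously. Wavelet and bin count are our assumptions:

```python
import numpy as np
import pywt
from scipy import stats

def subband_chi2_gaussianity(frame, wavelet='db4', level=2, bins=32):
    """Chi-square goodness-of-fit of DWT detail subbands vs a fitted Gaussian.

    Returns one p-value per detail subband of the 2D array `frame`.
    """
    coeffs = pywt.wavedec2(np.asarray(frame, dtype=np.float64),
                           wavelet, level=level)
    pvalues = {}
    for lvl, details in enumerate(coeffs[1:], start=1):
        for name, band in zip('HVD', details):
            x = band.ravel()
            observed, edges = np.histogram(x, bins=bins)
            expected = np.diff(stats.norm.cdf(edges, x.mean(),
                                              x.std(ddof=1))) * x.size
            expected = np.maximum(expected, 1e-9)
            expected *= observed.sum() / expected.sum()  # match totals
            _, p = stats.chisquare(observed, expected, ddof=2)  # 2 fitted params
            pvalues[f'level{lvl}_{name}'] = float(p)
    return pvalues
```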

Proceedings Article
14 Jul 2005
TL;DR: This paper reconsiders an outstanding informed coding method developed for grey-level still images and adapts and extends it to colour video, allowing a comparison between spread spectrum and informed watermarking techniques in mobile networks.
Abstract: Faced with the continuous increase of processing and storage capabilities in the digital world, intellectual right holders consider watermarking as an appropriate means to protect their property. Emerging from mobile network environments, the youngest member of the watermarking application family is low-bitrate video protection. This paper reconsiders an outstanding informed coding method developed for grey-level still images and adapts and extends it to such colour video. The experiments were carried out in cooperation with the SFR wireless service provider in France and pointed to significant improvements in some practical applications. These results also allow a comparison between spread spectrum and informed watermarking techniques in mobile networks.
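For reference, the spread-spectrum baseline against which informed methods are compared can be written in a few lines: add a keyed pseudo-noise pattern, detect by normalized correlation. A toy 1D sketch (e.g., over transform coefficients), not the paper's video scheme:

```python
import numpy as np

def ss_embed(host, key, strength=1.0):
    """Additive spread-spectrum embedding: host + strength * pn_sequence.
    `host` is a 1D array of coefficients; `key` seeds the PN generator."""
    rng = np.random.default_rng(key)
    pn = rng.choice((-1.0, 1.0), size=host.shape)
    return host + strength * pn, pn

def ss_detect(signal, pn, threshold):
    """Normalized-correlation detector; `threshold` sets the trade-off
    between missed detections and the false-alarm probability."""
    corr = signal.dot(pn) / (np.linalg.norm(signal) * np.linalg.norm(pn))
    return corr > threshold, corr
```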

Journal ArticleDOI
01 Jan 2005 - ITBM-RBM
TL;DR: In this article, the authors describe the ongoing development of a morpho-functional simulator of the human upper and proximal airways to support diagnosis, medico-surgical procedures and the administration of drugs by inhalation.
Abstract: This study describes the ongoing development of a morpho-functional simulator of the upper and proximal airways in humans, intended to support diagnosis, medico-surgical procedures and the administration of drugs by inhalation. This multidisciplinary work brings together varied tools and knowledge, from medical imaging and physical and numerical modelling through pathophysiology and experimental validation, structured around five distinct sub-projects briefly described here: (I) patient explorations; (II) anatomo-functional data and concepts; (III) physical and numerical modelling; (IV) a morpho-functional simulator of the respiratory airways; and (V) in vivo validation. This study is part of the cooperative project entitled R-MOD, financed by Air Liquide and the French Ministry of Research. Air Liquide is the project coordinator.

Proceedings Article
14 Jul 2005
TL;DR: This paper presents a study devoted to robust video watermarking in mobile networks: it reconsiders a method developed for regular networks and re-evaluates it under mobile constraints, with results that fulfil properties such as robustness, transparency, obliviousness, and a low probability of false alarm.
Abstract: Music, video, and 3D characters are just some examples of content that has imposed itself as a very important component of data distribution to mobile terminals. Hence, reliably ascertaining the related property rights is nowadays a crucial issue. This paper presents a study devoted to robust video watermarking in mobile networks: it reconsiders a method developed for regular networks and re-evaluates it under mobile constraints. Experiments were carried out in cooperation with the SFR wireless service provider in France. The results obtained fulfil properties such as robustness (with respect to common attacks), transparency, obliviousness, and a low probability of false alarm.

Proceedings ArticleDOI
TL;DR: The proposed 3D reconstruction methodology combines 2D segmentation and 3D surface regularization approaches and provides airway lumen robust discrimination from the surrounding tissues, while preserving the connectivity relationship between the different anatomical structures.
Abstract: Under the framework of clinical respiratory investigation, providing accurate modalities for morpho-functional analysis is essential for diagnosis improvement, surgical planning and follow-up. This paper focuses on the upper airways investigation and develops an automated approach for 3D mesh reconstruction from MDCT acquisitions. In order to overcome the difficulties related to the complex morphology of the upper airways and to the image gray level heterogeneity of the airway lumens and thin bony septa, the proposed 3D reconstruction methodology combines 2D segmentation and 3D surface regularization approaches. The segmentation algorithm relies on mathematical morphology theory and provides airway lumen robust discrimination from the surrounding tissues, while preserving the connectivity relationship between the different anatomical structures. The 3D regularization step uses an energy-based modeling in order to achieve a smooth and well-fitted 3D surface of the upper airways. An accurate 3D mesh representation of the reconstructed airways makes it possible to develop specific clinical applications such as virtual endoscopy, surgical planning and computer assisted intervention. In addition, building up patient-specific 3D models of upper airways is highly valuable for the study and design of inhaled medication delivery via computational fluid dynamics (CFD) simulations.
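The simplest instance of such energy-based surface regularization is iterative Laplacian (umbrella) smoothing, sketched below; the paper's energy model also balances data-fitting terms, which are omitted here:

```python
import numpy as np

def laplacian_smooth(vertices, faces, iterations=10, lam=0.5):
    """Iterative umbrella-operator smoothing of a triangle mesh.

    Each vertex moves toward the centroid of its neighbors; lam in (0, 1)
    trades smoothness against shape preservation.
    """
    V = np.asarray(vertices, dtype=np.float64).copy()
    # build vertex adjacency from the triangles
    neighbors = [set() for _ in range(len(V))]
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    neighbors = [np.fromiter(n, dtype=int) for n in neighbors]
    for _ in range(iterations):
        centroids = np.stack([V[n].mean(axis=0) if len(n) else V[i]
                              for i, n in enumerate(neighbors)])
        V += lam * (centroids - V)
    return V
```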

Proceedings ArticleDOI
TL;DR: This paper investigates how 3D facial animation techniques can be exploited within the specific framework of 2D cartoon production, selecting two different controller-based approaches for evaluation: RBF- and wire-based deformations.
Abstract: This paper investigates how 3D facial animation techniques can be exploited within the specific framework of 2D cartoon production. An overview of the most representative 3D facial animation techniques is first presented. Physical modeling, free-form deformations, direct face parameterizations and controller-based approaches are identified and discussed in detail. From this critical analysis of the literature, we selected two different controller-based approaches for evaluation purposes: RBF- and wire-based deformations. Experiments have been carried out on a corpus of 3D face models with both neutral and target expressions available. The RBF-based techniques provide smoother and more stable deformation fields at a lower modeling effort than the wire-based approaches. Both methods are appropriate for achieving 2D/3D deformation and automating 2D cartoon production.
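In their most basic form, RBF-based controller deformations interpolate the displacements of a few controller points over the whole mesh. A sketch using SciPy's RBF interpolator (the kernel choice is our assumption):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def rbf_face_deformation(vertices, controls_neutral, controls_target):
    """Propagate controller displacements to a whole face mesh with RBFs.

    controls_neutral/controls_target: (K, 3) controller positions on the
    neutral and expression faces. The interpolated displacement field is
    evaluated at every mesh vertex.
    """
    displacements = controls_target - controls_neutral
    field = RBFInterpolator(controls_neutral, displacements,
                            kernel='thin_plate_spline')
    return vertices + field(vertices)
```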

Proceedings Article
14 Jul 2005
TL;DR: The proposed compression scheme shows significant gains compared with Spectral Compression, and outperforms the TG and MPEG-4 encoders, especially within the range of low bitrates.
Abstract: This paper proposes a new progressive compression scheme for 3D triangular meshes, based on a multi-patch B-Spline representation. First, the mesh is segmented into multiple patches. Each patch is then parameterized and approximated by a B-Spline surface. The B-Spline control points are stored in 2D images and compressed using optimized still image encoders. The initial mesh topology is losslessly encoded by applying the Touma and Gotsman (TG) algorithm. The proposed compression scheme shows significant gains (30% on average) compared with Spectral Compression (SC), and outperforms the TG and MPEG-4 encoders, especially within the range of low bitrates (fewer than 8 bits per vertex).
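The per-patch fitting step can be sketched with SciPy's least-squares B-spline routines: fit one bicubic surface per coordinate over the patch parameterization and keep the control points for image coding. An illustration of the principle, not the paper's encoder:

```python
import numpy as np
from scipy import interpolate

def fit_patch_bspline(u, v, xyz, smoothing=None):
    """Bicubic B-spline approximation of one parameterized patch.

    u, v: 1D patch parameters of the patch vertices; xyz: (N, 3) positions.
    Returns one (tx, ty, c, kx, ky) spline per coordinate; the control
    points c are what the scheme would pack into 2D images for coding.
    """
    kwargs = {} if smoothing is None else {'s': smoothing}
    return [interpolate.bisplrep(u, v, xyz[:, k], **kwargs) for k in range(3)]

def eval_patch(tcks, u_grid, v_grid):
    """Reconstruct the patch on sorted 1D parameter grids."""
    return np.dstack([interpolate.bisplev(u_grid, v_grid, t) for t in tcks])
```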

Proceedings Article
14 Jul 2005
TL;DR: A clinical multimedia system, providing the appropriate tools for bronchial reactivity and wall remodeling evaluation from successive MDCT examinations in the course of a treatment, makes it possible to estimate the impact of a therapeutic protocol in mild and severe asthmatics.
Abstract: In the framework of therapy efficiency assessment in asthma, this paper describes a clinical multimedia system providing the appropriate tools for bronchial reactivity and wall remodeling evaluation from successive MDCT examinations in the course of a treatment. Relying on the 3D reconstruction of the bronchial tree, central axis analysis and accurate quantification capabilities, such a system makes it possible to estimate the impact of a therapeutic protocol in mild and severe asthmatics, as demonstrated by the clinical study discussed here.