Author

Xiangyun Liao

Other affiliations: Wuhan University
Bio: Xiangyun Liao is an academic researcher from the Chinese Academy of Sciences. The author has contributed to research in the topics of Segmentation and Computer science, has an h-index of 10, and has co-authored 52 publications receiving 423 citations. Previous affiliations of Xiangyun Liao include Wuhan University.


Papers
Journal ArticleDOI
TL;DR: This work presents the methodologies and evaluation results for the WHS algorithms selected from the submissions to the Multi-Modality Whole Heart Segmentation (MM-WHS) challenge, held in conjunction with MICCAI 2017.

216 citations

Journal ArticleDOI
TL;DR: The spatial channel-wise convolution, a convolutional operation along the channel direction of feature maps, is proposed to extract the mapping relationship of spatial information between pixels, which facilitates learning this relationship in the feature maps and distinguishing tumors from liver tissue.
Abstract: It is challenging to automatically and accurately segment the liver and tumors in computed tomography (CT) images, as over-segmentation or under-segmentation often occurs when the Hounsfield unit (HU) values of the liver and tumors are close to those of other tissues or the background. In this paper, we propose spatial channel-wise convolution, a convolutional operation along the channel direction of feature maps, to extract the mapping relationship of spatial information between pixels, which facilitates learning this relationship in the feature maps and distinguishing tumors from liver tissue. In addition, we put forward an iterative extending learning strategy, which optimizes the mapping relationship of spatial information between pixels at different scales and enables spatial channel-wise convolution to map the spatial information between pixels in high-level feature maps. Finally, we propose an end-to-end convolutional neural network called Channel-UNet, which takes UNet as the main structure of the network and adds spatial channel-wise convolution to each up-sampling and down-sampling module. The network fuses the optimized spatial mapping relationships extracted by spatial channel-wise convolution with the information extracted by the feature maps, realizing multi-scale information fusion. The proposed Channel-UNet is validated on the segmentation task of the 3Dircadb dataset. The Dice values for liver and tumor segmentation are 0.984 and 0.940, respectively, slightly superior to the current best performance. Moreover, compared with the current best method, the number of parameters of our method is reduced by 25.7% and the training time by 33.3%. The experimental results demonstrate the efficiency and high accuracy of Channel-UNet for liver and tumor segmentation in CT images.
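The core operation described above can be illustrated with a minimal, self-contained sketch (not the authors' code; the function name, zero padding, and toy kernel are assumptions): a 1-D convolution applied along the channel axis, independently at each spatial position of a feature map shaped [C][H][W].

```python
# Minimal sketch of spatial channel-wise convolution (not the authors'
# code; zero padding and the toy kernel are assumptions): a 1-D
# convolution applied along the channel axis, independently at each
# spatial position of a feature map shaped [C][H][W].

def channel_wise_conv(feature_map, kernel):
    """Convolve along the channel dimension with zero padding so the
    channel count is preserved. feature_map: list of C grids of H x W."""
    C = len(feature_map)
    H, W = len(feature_map[0]), len(feature_map[0][0])
    k = len(kernel)
    pad = k // 2
    out = [[[0.0] * W for _ in range(H)] for _ in range(C)]
    for y in range(H):
        for x in range(W):
            channel_vec = [feature_map[c][y][x] for c in range(C)]
            for c in range(C):
                acc = 0.0
                for j in range(k):
                    idx = c + j - pad
                    if 0 <= idx < C:  # zero padding at the channel ends
                        acc += kernel[j] * channel_vec[idx]
                out[c][y][x] = acc
    return out

# Toy example: 3 channels of a 1x1 map, averaged over neighbouring channels.
fmap = [[[1.0]], [[2.0]], [[3.0]]]
result = channel_wise_conv(fmap, [1 / 3, 1 / 3, 1 / 3])
# The middle channel sees all three values: result[1][0][0] == 2.0
```

In the paper this operation sits inside a learned network; the fixed averaging kernel here only makes the mixing of information across channels at a single spatial position visible.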

77 citations

Journal ArticleDOI
Zhaoliang Duan, Zhiyong Yuan, Xiangyun Liao, Weixin Si, Jianhui Zhao
TL;DR: A suite for 3D tracking and positioning of surgical instruments based on stereoscopic vision is proposed; it captures the spatial movements of a simulated surgical instrument in real time and provides six-degree-of-freedom information with an absolute error of less than 1 mm.
Abstract: 3D tracking and positioning of surgical instruments is an indispensable part of virtual surgery training systems, because it is the primary interface through which the trainee communicates with the virtual environment. A suite for 3D tracking and positioning of surgical instruments based on stereoscopic vision is proposed. It captures the spatial movements of a simulated surgical instrument in real time and provides six-degree-of-freedom information with an absolute error of less than 1 mm. The experimental results show that the system is highly accurate, easy to operate, and inexpensive. Combined with a force sensor and an embedded acquisition device, this 3D tracking and positioning method can serve as a measurement platform for physical parameters, enabling the measurement of soft tissue parameters.
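The depth-recovery step behind such stereoscopic tracking can be sketched as standard pinhole triangulation for a rectified camera pair (an illustrative sketch, not the paper's implementation; the focal length and baseline values are hypothetical):

```python
# Illustrative sketch of the depth-recovery step behind stereoscopic
# instrument tracking (not the paper's implementation): standard pinhole
# triangulation for a rectified camera pair. The focal length and
# baseline values below are hypothetical.

def triangulate(xl, xr, y, f, baseline):
    """xl, xr: marker column in the left/right image (px, measured from
    the principal point); y: marker row (px); f: focal length (px);
    baseline: camera separation (mm). Returns (X, Y, Z) in mm."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("marker must have positive disparity")
    Z = f * baseline / disparity  # depth along the optical axis
    X = xl * Z / f                # lateral offset of the marker
    Y = y * Z / f                 # vertical offset of the marker
    return X, Y, Z

# Hypothetical calibration: f = 800 px, baseline = 60 mm.
X, Y, Z = triangulate(xl=120.0, xr=100.0, y=40.0, f=800.0, baseline=60.0)
# disparity = 20 px  ->  Z = 800 * 60 / 20 = 2400 mm
```

Triangulating two or more markers on the instrument in this way yields its position; orientation (the remaining degrees of freedom) follows from the relative marker positions.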

58 citations

Book ChapterDOI
10 Sep 2017
TL;DR: A deeply-supervised 3D U-Net is presented for fully automatic whole-heart segmentation by jointly using the multi-modal MRI and CT images to define the overall heart structure.
Abstract: Accurate whole-heart segmentation from multi-modality medical images (MRI, CT) plays an important role in many clinical applications, such as precision surgical planning and improvement of diagnosis and treatment. This paper presents a deeply-supervised 3D U-Net for fully automatic whole-heart segmentation by jointly using the multi-modal MRI and CT images. First, a 3D U-Net is employed to coarsely detect the whole heart and segment its region of interest, which can alleviate the impact of surrounding tissues. Then, we artificially enlarge the training set by extracting different regions of interest so as to train a deep network. We perform voxel-wise whole-heart segmentation with the end-to-end trained deeply-supervised 3D U-Net. Considering that different modality information of the whole heart has a certain complementary effect, we extract multi-modality features by fusing MRI and CT images to define the overall heart structure, and achieve final results. We evaluate our method on cardiac images from the multi-modality whole heart segmentation (MM-WHS) 2017 challenge.
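The deep-supervision idea used above can be sketched as follows (a simplified illustration, not the authors' code; the loss values and weight are hypothetical): auxiliary segmentation heads at intermediate decoder stages each contribute a weighted term to the total training loss, strengthening gradient flow through the deeper layers.

```python
# Sketch of the deep-supervision idea (a simplified illustration, not the
# authors' code): auxiliary segmentation heads at intermediate decoder
# stages each contribute a weighted term to the total training loss.

def deeply_supervised_loss(main_loss, aux_losses, aux_weight=0.5):
    """Total loss = loss at the final output plus a weighted sum of the
    losses at the auxiliary (intermediate) outputs. In practice each
    auxiliary prediction is upsampled to full resolution before the
    per-voxel segmentation loss is computed."""
    return main_loss + aux_weight * sum(aux_losses)

# Hypothetical per-voxel losses from the final head and two auxiliary heads.
total = deeply_supervised_loss(0.30, [0.50, 0.40], aux_weight=0.5)
# 0.30 + 0.5 * (0.50 + 0.40) = 0.75
```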

41 citations

Journal ArticleDOI
TL;DR: A novel magnetic levitation haptic device based on electromagnetic principles is presented to augment tissue stiffness perception in virtual environments, and the idea that the effective magnetic field (EMF) is closely related to the coil attitude is proposed for the first time.
Abstract: Haptic-based tissue stiffness perception is essential for palpation training system, which can provide the surgeon haptic cues for improving the diagnostic abilities. However, current haptic devices, such as Geomagic Touch, fail to provide immersive and natural haptic interaction in virtual surgery due to the inherent mechanical friction, inertia, limited workspace and flawed haptic feedback. To tackle this issue, we design a novel magnetic levitation haptic device based on electromagnetic principles to augment the tissue stiffness perception in virtual environment. Users can naturally interact with the virtual tissue by tracking the motion of magnetic stylus using stereoscopic vision so that they can accurately sense the stiffness by the magnetic stylus, which moves in the magnetic field generated by our device. We propose the idea that the effective magnetic field (EMF) is closely related to the coil attitude for the first time. To fully harness the magnetic field and flexibly generate the specific magnetic field for obtaining required haptic perception, we adopt probability clouds to describe the requirement of interactive applications and put forward an algorithm to calculate the best coil attitude. Moreover, we design a control interface circuit and present a self-adaptive fuzzy proportion integration differentiation (PID) algorithm to precisely control the coil current. We evaluate our haptic device via a series of quantitative experiments which show the high consistency of the experimental and simulated magnetic flux density, the high accuracy (0.28 mm) of real-time 3D positioning and tracking of the magnetic stylus, the low power consumption of the adjustable coil configuration, and the tissue stiffness perception accuracy improvement by 2.38 percent with the self-adaptive fuzzy PID algorithm. 
We conduct a user study with 22 participants, and the results suggest most of the users can clearly and immersively perceive different tissue stiffness and easily detect the tissue abnormality. Experimental results demonstrate that our magnetic levitation haptic device can provide accurate tissue stiffness perception augmentation with natural and immersive haptic interaction.
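The current-control loop can be illustrated with a plain discrete PID sketch (not the authors' self-adaptive fuzzy PID, which additionally retunes the gains online from the error and its rate of change; the gains, time step, and first-order coil model below are assumptions for illustration):

```python
# Plain discrete PID sketch (not the authors' self-adaptive fuzzy PID,
# which additionally retunes Kp/Ki/Kd online; the gains, time step, and
# first-order coil model below are assumptions for illustration).

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        """One control update: returns the actuation signal."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Drive a first-order model of the coil current toward a 2.0 A setpoint.
pid = PID(kp=0.8, ki=0.5, kd=0.05, dt=0.01)
current = 0.0
for _ in range(2000):
    u = pid.step(2.0, current)
    current += (u - current) * 0.01  # simple lag model of the coil
# The integral term removes the steady-state offset, so the simulated
# current settles close to the 2.0 A setpoint.
```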

33 citations


Cited by
Book ChapterDOI
01 Jan 1997
TL;DR: The boundary layer equations for plane, incompressible, and steady flow are presented.
Abstract: The boundary layer equations for plane, incompressible, and steady flow are $$u\,\frac{\partial u}{\partial x} + v\,\frac{\partial u}{\partial y} = -\frac{1}{\varrho}\frac{\partial p}{\partial x} + \nu\,\frac{\partial^2 u}{\partial y^2}, \qquad \frac{\partial p}{\partial y} = 0, \qquad \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0.$$

2,598 citations

Reference EntryDOI
15 Oct 2004

2,118 citations

Journal ArticleDOI
TL;DR: This article provides a detailed review of these solutions, summarizing both the technical novelties and empirical results, compares the benefits and requirements of the surveyed methodologies, and provides recommended solutions.

487 citations

Journal ArticleDOI
TL;DR: A narrative literature review examines the numerous developments and breakthroughs in the U-net architecture, provides observations on recent trends, and discusses innovations in deep learning and how these tools facilitate U-net.
Abstract: U-net is a convolutional encoder-decoder architecture developed primarily for image segmentation tasks. Its accuracy and efficiency give U-net high utility within the medical imaging community and have resulted in its extensive adoption as the primary tool for segmentation tasks in medical imaging. The success of U-net is evident in its widespread use in nearly all major imaging modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a segmentation tool, it has also been applied in other settings. Given that U-net's potential is still increasing, this narrative literature review examines the numerous developments and breakthroughs in the U-net architecture and provides observations on recent trends. We also discuss the many innovations in deep learning and how these tools facilitate U-net. In addition, we review the different image modalities and application areas that have been enhanced by U-net.

425 citations