scispace - formally typeset
Topic

Image conversion

About: Image conversion is a research topic. Over the lifetime, 2490 publications have been published within this topic receiving 19077 citations.


Papers
Proceedings ArticleDOI
01 Jul 2017
TL;DR: The implemented dGBM-MDC approach, which uses a minimum distance classifier on anatomic sections of T1-weighted 3D magnetic resonance imaging (MRI) datasets to support a clinical diagnosis system, demonstrates robustness and efficiency in detecting the GBM disease pattern compared with manual slice-by-slice segmentation.
Abstract: In this study, a novel brain image processing method is proposed to detect one of the most aggressive malignant primary brain tumors, glioblastoma multiforme (GBM), using a minimum distance classifier (dGBM-MDC) to support a clinical diagnosis system on anatomic sections of T1-weighted 3D magnetic resonance imaging (MRI). The approach begins with image conversion to the L∗a∗b∗ color space, which models pixel values as colors. A small sample region is then selected for each pattern color in order to compute its average value before image pattern classification with the MDC; the MDC classifies each pixel by calculating the Euclidean distance between that pixel and each color marker, yielding an image segmentation. We implement the proposed dGBM-MDC approach on samples of three anatomic sections of a T1w 3D MRI (axial, sagittal and coronal cross-sections) from the real-time GBM-3D-Slicer datasets. The results demonstrate the robustness and efficiency of the approach in detecting the GBM disease pattern compared with manual slice-by-slice segmentation.
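The core classification step described above — assigning each pixel to the nearest color marker by Euclidean distance — can be sketched as follows. This is a minimal illustration with made-up toy data and a hypothetical function name, not the paper's implementation; the color-space conversion and marker selection from sample regions are assumed to have already been done.

```python
import numpy as np

def minimum_distance_classify(image, markers):
    """Assign each pixel to the nearest color marker (Euclidean distance).

    image:   (H, W, C) array of pixel values (e.g. a*b* channels in L*a*b* space)
    markers: (K, C) array, one average color per class, computed from
             small hand-picked sample regions
    returns: (H, W) array of class indices
    """
    # Distance from every pixel to every marker, via broadcasting: (H, W, K)
    diff = image[:, :, None, :] - markers[None, None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # Each pixel gets the label of its closest marker
    return np.argmin(dist, axis=-1)

# Toy example: two classes, "dark" and "bright", two color channels
img = np.array([[[0.1, 0.1], [0.9, 0.8]],
                [[0.2, 0.0], [1.0, 0.9]]])
markers = np.array([[0.0, 0.0],   # class 0: dark
                    [1.0, 1.0]])  # class 1: bright
labels = minimum_distance_classify(img, markers)
print(labels)  # [[0 1]
               #  [0 1]]
```

The segmentation mask then falls out directly from the label array: all pixels sharing a class index belong to the same color pattern.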
Patent
21 Jul 2011
TL;DR: In this paper, a stereoscopic image conversion method and a stereoscopic image conversion device are presented, where objects for which depth values will be set are selected in a two-dimensional image to be converted, a plurality of demarcation points are set along the boundaries of the objects, the inside areas surrounded by the demarcation points are recognized as demarcated areas, depth values are set for the respective pixels located in the demarcated areas, and the pixels are then moved to the left or right in proportion to their depth values, thereby generating a processed image.
Abstract: Disclosed are a stereoscopic image conversion method and a stereoscopic image conversion device, wherein objects constituting targets for which depth values will be set are selected in a two-dimensional image to be subjected to conversion, a plurality of demarcation points are set along the boundaries of the objects, the inside areas surrounded by the demarcation points are recognised as demarcated areas, depth values are set for the respective pixels located in the demarcated areas, and then the respective pixels are moved to left or right in proportion to the depth values, thereby generating a processed image, and the image to be subjected to conversion and the processed image are used as a left-eye image and a right-eye image so as to display a three-dimensional stereoscopic image.
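The pixel-shifting step at the heart of the patent — displacing each pixel horizontally in proportion to its depth value to synthesize one eye's view — can be sketched as below. This is a simplified illustration under assumptions of my own (grayscale input, integer shifts, no occlusion or hole filling), not the patented method itself.

```python
import numpy as np

def shift_by_depth(image, depth, max_shift=4):
    """Generate one eye's view by shifting each pixel horizontally
    in proportion to its depth value; the unmodified source serves
    as the other eye's image.

    image: (H, W) grayscale source
    depth: (H, W) depth values in [0, 1]; larger = closer = larger shift
    """
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            # Integer horizontal displacement proportional to depth
            nx = x + int(round(depth[y, x] * max_shift))
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out

# Toy example: a one-row image with uniform depth 0.25 shifts right by 1
img = np.array([[1, 2, 3, 4]])
depth = np.full((1, 4), 0.25)
right = shift_by_depth(img, depth, max_shift=4)
print(right)  # [[0 1 2 3]]
```

In a real converter the vacated pixels (the zeros above) would be filled by inpainting or interpolation, and the shift direction would differ between the left-eye and right-eye images.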
Posted Content
TL;DR: This paper takes a deeper look at representing documents as images, converting text to a representation very similar to an image so that any deep network able to handle images is equally able to handle text, and subsequently applies very simple convolution-based models taken as-is from the computer vision domain.
Abstract: Text classification is a fundamental task in NLP applications. Latest research in this field has largely been divided into two major sub-fields. Learning representations is one sub-field and learning deeper models, both sequential and convolutional, which again connects back to the representation is the other side. We posit the idea that the stronger the representation is, the simpler classifier models are needed to achieve higher performance. In this paper we propose a completely novel direction to text classification research, wherein we convert text to a representation very similar to images, such that any deep network able to handle images is equally able to handle text. We take a deeper look at the representation of documents as an image and subsequently utilize very simple convolution based models taken as is from computer vision domain. This image can be cropped, re-scaled, re-sampled and augmented just like any other image to work with most of the state-of-the-art large convolution based models which have been designed to handle large image datasets. We show impressive results with some of the latest benchmarks in the related fields. We perform transfer learning experiments, both from text to text domain and also from image to text domain. We believe this is a paradigm shift from the way document understanding and text classification has been traditionally done, and will drive numerous novel research ideas in the community.
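To make the idea of feeding text to image models concrete, here is a toy sketch of one possible text-to-"image" encoding: each character's byte value becomes one pixel in a fixed-size 2D grid. The paper's actual representation is richer and is not specified here; this only illustrates that text can be laid out as an array that a CNN could crop, re-scale, or augment like any other image.

```python
import numpy as np

def text_to_image(text, width=16, height=16):
    """Toy text-to-image encoding: one byte value per pixel, row by row.

    Truncates or zero-pads the text so the output is always
    a fixed (height, width) uint8 array, like a grayscale image.
    """
    codes = [ord(c) % 256 for c in text[: width * height]]
    codes += [0] * (width * height - len(codes))  # pad to a full grid
    return np.array(codes, dtype=np.uint8).reshape(height, width)

img = text_to_image("hello world")
print(img.shape)   # (16, 16)
print(img[0, 0])   # 104  (ord('h'))
```

Any standard convolutional classifier expecting a single-channel 16x16 input could then be trained directly on such arrays.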
Patent
28 Dec 2012
TL;DR: In this paper, a method is proposed for cutting a radiation image conversion plate that does not generate cracks and maintains good productivity, even for plates produced by a vapor phase deposition method.
Abstract: PROBLEM TO BE SOLVED: To provide a method for cutting a radiation image conversion plate that does not generate cracks and maintains good productivity, even for a radiation image conversion plate produced by a vapor phase deposition method. SOLUTION: In this cutting method, when cutting a radiation image conversion plate of multilayer structure comprising a support formed from a polymer material and a phosphor layer formed on the support by vapor phase deposition, the cutting size and the number of plates are input into a cutting device, and the radiation image conversion plate is cut into a plurality of radiation image conversion panels of the desired size.
Patent
03 Dec 2019
TL;DR: In this article, a multi-pose part image generation method is proposed, comprising placing a single part on a turntable, rotating it, collecting images, and filtering out most of the noise.
Abstract: The invention discloses a multi-pose part image generation method and system. The method comprises the following steps: A, placement: a single part is placed on a turntable; B, rotation: an acquisition host drives a motor, which rotates the turntable through a rotating shaft and thereby rotates the single part; C, collection: as the single part rotates, it is photographed by a camera and the pictures are stored by the acquisition host; and D, filtering: median filtering is applied to each picture, removing most of the noise. Placement and rotation achieve the effect of turning a single part; collection achieves single-part image acquisition; filtering achieves noise elimination; and conversion achieves image conversion, so that multi-part images can be simulated and generated. The complex multi-part shooting process is thereby omitted, saving the time and labor costs incurred when shooting multi-part images.
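The filtering step (D) above is a standard median filter. A minimal pure-NumPy sketch of a 3x3 median filter, with a synthetic noisy image of my own construction (not the patent's data), looks like this:

```python
import numpy as np

def median_filter_3x3(image):
    """Remove salt-and-pepper noise with a 3x3 median filter.

    Each output pixel is the median of its 3x3 neighborhood;
    edge pixels are handled by replicating the border.
    """
    padded = np.pad(image, 1, mode="edge")
    h, w = image.shape
    out = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + 3, x:x + 3])
    return out

# A single bright noise pixel in a flat region is removed entirely
noisy = np.full((5, 5), 10, dtype=np.uint8)
noisy[2, 2] = 255
clean = median_filter_3x3(noisy)
print(clean[2, 2])  # 10
```

In practice a library routine (e.g. `scipy.ndimage.median_filter`) would replace the explicit loops, but the behavior is the same: isolated outliers vanish while edges are preserved better than with a mean filter.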

Network Information
Related Topics (5)
Image processing
229.9K papers, 3.5M citations
84% related
Feature (computer vision)
128.2K papers, 1.7M citations
83% related
Pixel
136.5K papers, 1.5M citations
82% related
Feature extraction
111.8K papers, 2.1M citations
80% related
Image segmentation
79.6K papers, 1.8M citations
79% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2021  32
2020  74
2019  117
2018  115
2017  100
2016  107