Topic

Image conversion

About: Image conversion is a research topic. Over the lifetime, 2490 publications have been published within this topic receiving 19077 citations.


Papers
Patent
27 Dec 2013
TL;DR: In this article, the authors propose an imaging apparatus that can image a visual field area limited in both the vertical and lateral directions, widely image the peripheral area of that visual field, and collectively display the imaged results.
Abstract: PROBLEM TO BE SOLVED: To provide an imaging apparatus which can image a visual field area limited in both vertical and lateral directions, can widely image a peripheral area of the visual field area, and can collectively display the imaged results, and to provide an image processing method. SOLUTION: An imaging apparatus 1 includes: an imaging section 3 which images a visual field area for a whole circumference and generates image data of the visual field area; a face detecting section 4c which refers to face image information in a face image information storage section 9b, which stores information on images of persons' faces, and detects a person's face included in the image data; a horizontal determination section 8 which determines whether the imaging apparatus 1 is horizontal; and an image conversion section 4a which corrects at least one of the aspect ratio and the inclination of the face image data detected by the face detecting section 4c when the horizontal determination section 8 determines that the imaging apparatus 1 is not horizontal.
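The patent does not publish the correction formula itself; as a hedged illustration of the kind of geometric fix-up its image conversion section performs, the sketch below builds an affine matrix that undoes a known camera tilt and rescales the horizontal axis to restore a face crop's aspect ratio. The function names and parameters are hypothetical, not taken from the patent.

```python
import math

def correction_matrix(tilt_deg, aspect_scale):
    """2x3 affine matrix undoing a camera tilt and rescaling width.

    tilt_deg:     detected inclination of the apparatus (degrees)
    aspect_scale: horizontal scale factor restoring the face's aspect ratio
    """
    t = math.radians(-tilt_deg)          # rotate back by the detected tilt
    c, s = math.cos(t), math.sin(t)
    # rotation composed with an x-axis scaling
    return [[aspect_scale * c, -aspect_scale * s, 0.0],
            [s, c, 0.0]]

def apply_affine(m, x, y):
    """Map a pixel coordinate through the 2x3 affine matrix."""
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])
```

In a real pipeline the matrix would be passed to a warping routine applied to the whole face crop; here `apply_affine` just demonstrates the mapping on a single coordinate.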

2 citations

Patent
31 Jul 2009
TL;DR: In this paper, a learning-image coefficient vector in an intermediate eigenspace is acquired, and a weight factor is determined based on the correlation between the learning-image coefficient vector and an input-image coefficient vector in terms of shortest distance, direction, distribution form, or distribution spread.
Abstract: PROBLEM TO BE SOLVED: To achieve highly accurate and robust image conversion that relaxes the input conditions on a source image. SOLUTION: A learning-image coefficient vector in an intermediate eigenspace is acquired (#10, 12). In the intermediate eigenspace, a weight factor is determined based on the correlation between the learning-image coefficient vector and an input-image coefficient vector in terms of shortest distance, direction, distribution form, or distribution spread (#62). The weight factor is used to determine the ratio between a tensor super-resolution process using the tensor projection method and other general super-resolution processes. A first super-resolution image generated by the tensor super-resolution process (#30, 34) and a second super-resolution image generated by the general super-resolution process (#64) are combined (#66). Thus, a high-quality image (#36) is restored from an input low-quality image.
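The weight determination in the eigenspace is the patent's contribution and is not reproduced here; the final combination step (#66), however, is a plain weighted blend, sketched below on flat pixel lists. The function name is hypothetical and the images are assumed to be pre-aligned and equally sized.

```python
def blend_superresolution(sr_tensor, sr_general, weight):
    """Pixel-wise blend of two super-resolution results.

    weight in [0, 1]: ratio given to the tensor-projection result,
    derived in the patent from coefficient-vector correlations in the
    intermediate eigenspace.
    """
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must lie in [0, 1]")
    return [weight * a + (1.0 - weight) * b
            for a, b in zip(sr_tensor, sr_general)]
```

With `weight` near 1 the tensor-projection output dominates (inputs close to the learning distribution); near 0 the general super-resolution result takes over.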

2 citations

Patent
10 May 2016
TL;DR: In this paper, a three-dimensional image matching method using a plurality of cameras is proposed, which includes a step of extracting feature points from the images taken by the cameras, matching the extracted feature points to each other from a random pair of cameras, measuring the distance and position between the matched feature points, and separating 3D surface elements of the target object through the mesh algorithm.
Abstract: A three-dimensional image matching method includes: a step of taking images of a target object using a plurality of cameras; a step of extracting feature points from the images taken by the cameras; a step of matching the extracted feature points to each other from a random pair of cameras; a step of measuring the distance and position between the matched feature points; a step of separating three-dimensional surface elements of the target object through the mesh algorithm; and a step of performing image conversion on each divided three-dimensional surface element individually.
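The step of "measuring the distance and position between the matched feature points" amounts to triangulation. As a minimal sketch under stronger assumptions than the patent makes (a rectified stereo pair with equal focal lengths rather than an arbitrary pair of cameras), the depth of a matched point follows from its disparity:

```python
def depth_from_disparity(focal_px, baseline_m, x_left, x_right):
    """Depth of a matched feature point from a rectified stereo pair.

    focal_px:   focal length in pixels (assumed equal for both cameras)
    baseline_m: distance between the two camera centres in metres
    x_left, x_right: x-coordinate of the same feature in each image
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    # depth = f * B / d for a rectified, fronto-parallel stereo rig
    return focal_px * baseline_m / disparity
```

For example, a feature seen at x = 120 px in the left image and x = 100 px in the right, with a 700 px focal length and 10 cm baseline, lies about 3.5 m away. Real systems with unrectified camera pairs would instead triangulate using each camera's full projection matrix.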

2 citations

DOI
01 Jul 2021
TL;DR: A straightforward method for identifying road lines in high-speed video images using edge features is described; it works well under different daylight conditions, such as sunny, snowy or rainy days, and inside tunnels.
Abstract:
Background and Objectives: Lane detection systems are an important part of safe and secure driving, alerting the driver in the event of deviations from the main lane. Lane detection can also save the lives of car occupants if they deviate from the road due to driver distraction.
Methods: In this paper, a real-time and illumination-invariant lane detection method for high-speed video images is presented in three steps. In the first step, the necessary preprocessing is done, including noise removal, conversion of the input image from RGB colour to grey, and binarization. Then, a polygonal area in front of the vehicle is chosen as the region of interest to increase processing speed. Finally, edges within the region of interest are obtained with an edge detection algorithm, and the lanes on both sides of the vehicle are identified using the Hough transform.
Results: The proposed method was implemented on the IROADS database. It works well under different daylight conditions, such as sunny, snowy or rainy days, and inside tunnels. Implementation results show that the proposed algorithm has an average processing time of 28 milliseconds per frame and a detection accuracy of 96.78%.
Conclusion: In this paper, a straightforward method for identifying road lines in high-speed video images using edge features is described.
Copyrights © 2021 The author(s). This is an open access article distributed under the terms of the Creative Commons Attribution licence (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, as long as the original authors and source are cited.
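The final stage of the pipeline, the Hough transform, can be sketched in a few lines of pure Python. This is a toy voting implementation on a synthetic edge image, not the paper's code; production systems would use an optimised library routine such as OpenCV's `cv2.HoughLines` on the binarized region of interest.

```python
import math

def hough_peak(edge_points, n_theta=180):
    """Return the (rho, theta) line receiving the most votes.

    edge_points: (x, y) pixels surviving edge detection in the ROI.
    Lines are parameterised as rho = x*cos(theta) + y*sin(theta).
    """
    votes = {}
    for x, y in edge_points:
        for i in range(n_theta):
            theta = math.pi * i / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(rho, i)] = votes.get((rho, i), 0) + 1
    (rho, i), _ = max(votes.items(), key=lambda kv: kv[1])
    return rho, math.pi * i / n_theta

# Synthetic vertical "lane marking" at x = 5 in a small edge image.
points = [(5, y) for y in range(20)]
rho, theta = hough_peak(points)
```

For the synthetic vertical line, the accumulator peak lands at rho = 5 with theta near 0, i.e. the line x = 5, which is exactly how collinear edge pixels concentrate their votes into a single (rho, theta) bin.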

2 citations

Patent
03 Dec 1996
TL;DR: In this paper, the color reproducibility of a color image in a print is improved when an image forming device generates the print according to the measured densities of the B, G, R color images read by reflected light at an image reader.
Abstract: PURPOSE: To improve the color reproducibility of a color image in a print when an image forming device generates the print according to the measured densities of B, G, R color images read by reflected light at an image reader. CONSTITUTION: An image forming device 24 generates a color mixture image (N image) in which Y, M, C images are mixed, and a densitometer 28 measures the analysis density of the Y, M, C components of the Y, M, C images. Then an image reader 22 reads the Y, M, C, N images to obtain measured densities for B, G, R, and an image processing unit 23 derives a conversion function from the measured density to the analysis density, based on the measured density obtained by the image reader 22 and the analysis density measured by the densitometer 28.
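The patent does not specify the form of the conversion function; as a hedged illustration, the sketch below fits the simplest candidate, a least-squares line mapping the reader's measured density to the densitometer's analysis density from the paired patch readings. The function name is hypothetical, and a real calibration might use a higher-order curve or per-channel lookup tables.

```python
def fit_conversion(measured, analysis):
    """Least-squares line: d_analysis ~ a * d_measured + b.

    measured, analysis: paired density readings for the Y, M, C, N patches
    (measured by the image reader and the densitometer respectively).
    """
    n = len(measured)
    mx = sum(measured) / n
    my = sum(analysis) / n
    sxx = sum((x - mx) ** 2 for x in measured)
    sxy = sum((x - mx) * (y - my) for x, y in zip(measured, analysis))
    a = sxy / sxx                    # slope of the conversion line
    b = my - a * mx                  # intercept
    return a, b
```

Once fitted, the line converts any subsequent reader measurement into an estimated analysis density, which the image forming device can use when generating the print.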

2 citations


Network Information
Related Topics (5)
Image processing
229.9K papers, 3.5M citations
84% related
Feature (computer vision)
128.2K papers, 1.7M citations
83% related
Pixel
136.5K papers, 1.5M citations
82% related
Feature extraction
111.8K papers, 2.1M citations
80% related
Image segmentation
79.6K papers, 1.8M citations
79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2021    32
2020    74
2019    117
2018    115
2017    100
2016    107