Showing papers on "Image conversion published in 2013"


Journal ArticleDOI
Janusz Konrad1, Meng Wang2, Prakash Ishwar1, Chen Wu2, Debargha Mukherjee2 
TL;DR: This paper proposes a new class of methods based on the radically different approach of learning the 2D-to-3D conversion from examples, and develops two types of methods: one learns a point mapping from local image/video attributes, such as color, spatial position, and motion, to the scene depth at each pixel using a regression-type idea, and the other globally estimates the entire depth map of a query image directly from a repository of 3D images using a nearest-neighbor regression-type idea.
Abstract: Despite a significant growth in the last few years, the availability of 3D content is still dwarfed by that of its 2D counterpart. To close this gap, many 2D-to-3D image and video conversion methods have been proposed. Methods involving human operators have been most successful but also time-consuming and costly. Automatic methods, which typically make use of a deterministic 3D scene model, have not yet achieved the same level of quality for they rely on assumptions that are often violated in practice. In this paper, we propose a new class of methods that are based on the radically different approach of learning the 2D-to-3D conversion from examples. We develop two types of methods. The first is based on learning a point mapping from local image/video attributes, such as color, spatial position, and, in the case of video, motion at each pixel, to scene-depth at that pixel using a regression type idea. The second method is based on globally estimating the entire depth map of a query image directly from a repository of 3D images ( image+depth pairs or stereopairs) using a nearest-neighbor regression type idea. We demonstrate both the efficacy and the computational efficiency of our methods on numerous 2D images and discuss their drawbacks and benefits. Although far from perfect, our results demonstrate that repositories of 3D content can be used for effective 2D-to-3D image conversion. An extension to video is immediate by enforcing temporal continuity of computed depth maps.

104 citations
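
The repository-based (global) method lends itself to a compact illustration. The sketch below is not the authors' implementation; the thumbnail descriptor, the value of k, and the per-pixel median fusion are assumptions chosen only to convey the nearest-neighbor depth-transfer idea described in the abstract.

```python
import numpy as np

def thumbnail_descriptor(img, size=(16, 16)):
    """Tiny grayscale thumbnail used as an (assumed) global descriptor."""
    h, w = img.shape[:2]
    ys = np.linspace(0, h - 1, size[0]).astype(int)
    xs = np.linspace(0, w - 1, size[1]).astype(int)
    gray = img.mean(axis=2) if img.ndim == 3 else img
    return gray[np.ix_(ys, xs)].ravel()

def knn_depth_transfer(query_img, repo_imgs, repo_depths, k=5):
    """Fuse the depth maps of the k most similar repository images."""
    q = thumbnail_descriptor(query_img)
    dists = [np.linalg.norm(thumbnail_descriptor(im) - q) for im in repo_imgs]
    nearest = np.argsort(dists)[:k]
    # Per-pixel median of the neighbours' depth maps (all assumed same size).
    return np.median(np.stack([repo_depths[i] for i in nearest]), axis=0)

# Toy usage with random arrays standing in for a real image+depth repository.
rng = np.random.default_rng(0)
repo_imgs = [rng.random((120, 160, 3)) for _ in range(20)]
repo_depths = [rng.random((120, 160)) for _ in range(20)]
depth = knn_depth_transfer(rng.random((120, 160, 3)), repo_imgs, repo_depths)
print(depth.shape)  # (120, 160)
```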


Proceedings ArticleDOI
Ja-Won Seo1, Seong-Dae Kim1
18 Sep 2013
TL;DR: Experimental results demonstrate that the proposed ELSSP (Eigenvalue-weighted Linear Sum of Subspace Projections) method is superior to the state-of-the-art methods in terms of both conversion speed and image quality.
Abstract: In this paper, we present a novel color-to-gray image conversion method which preserves both color and texture discriminabilities effectively. Unlike previous approaches, the proposed method does not require any user-specific parameters for conversion. Moreover, the computational complexity is low enough to be applied to real-time applications. These breakthroughs are achieved by applying the ELSSP (Eigenvalue-weighted Linear Sum of Subspace Projections) method, which is proposed in this paper for the color-to-gray image conversion. Experimental results demonstrate that the proposed method is superior to the state-of-the-art methods in terms of both conversion speed and image quality.

28 citations
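
The exact ELSSP formulation is not reproduced here; the following is only a rough, PCA-flavored stand-in that projects pixel colors onto the eigenvectors of their covariance matrix and weights the projections by the eigenvalues, to convey the general idea of an eigenvalue-weighted sum of subspace projections. Every detail is an assumption.

```python
import numpy as np

def eigen_weighted_gray(rgb):
    """Illustrative stand-in, not the paper's ELSSP algorithm: project pixels
    onto the color-covariance eigenvectors and sum the projections weighted
    by sqrt(eigenvalue), then normalize to [0, 1]."""
    pixels = rgb.reshape(-1, 3).astype(np.float64)
    centered = pixels - pixels.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    weights = np.sqrt(np.maximum(evals, 0))
    gray = (centered @ evecs) @ weights           # weighted sum of projections
    gray -= gray.min()
    if gray.max() > 0:
        gray /= gray.max()
    return gray.reshape(rgb.shape[:2])

rgb = np.random.default_rng(1).random((64, 64, 3))
print(eigen_weighted_gray(rgb).shape)  # (64, 64)
```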


Patent
20 Dec 2013
TL;DR: In this article, a histogram of one or more characteristics of the reference image is used to generate an image conversion score; if the score exceeds a threshold, the reference image is converted to a modified image of a second image format having a second bit depth less than the first bit depth, and the modified image is scanned onto the continuous scan display screen.
Abstract: Various embodiments relating to reducing memory bandwidth consumed by a continuous scan display screen are provided. In one embodiment, scoring criteria are applied to a reference image of a first image format having a first bit depth to generate an image conversion score. The scoring criteria are based on a histogram of one or more characteristics of the reference image. If the image conversion score is greater than a threshold value, then the reference image is converted to a modified image of a second image format having a second bit depth less than the first bit depth, and the modified image is scanned onto the continuous scan display screen. If the image conversion score is less than the threshold value, then the reference image is scanned onto the continuous scan display screen.

27 citations
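
A hedged sketch of the scoring logic: build a histogram of one characteristic (luminance is assumed here), derive a score from it (histogram concentration is assumed), and reduce the bit depth only when the score clears a threshold. Neither the score nor the threshold value comes from the patent.

```python
import numpy as np

def conversion_score(luma, bins=256):
    """Score from a luminance histogram: fraction of pixels falling in the
    most-populated quarter of the bins (a stand-in for the patent's criteria)."""
    hist, _ = np.histogram(luma, bins=bins, range=(0, 256))
    top = np.sort(hist)[::-1][: bins // 4]
    return top.sum() / luma.size

def maybe_reduce_bit_depth(img8, threshold=0.9):
    """Convert 8-bit data to 4-bit only if the score exceeds the threshold."""
    luma = img8.mean(axis=2)
    if conversion_score(luma) > threshold:
        return (img8 >> 4).astype(np.uint8), True   # 8 bpp -> 4 bpp per channel
    return img8, False

img = (np.random.default_rng(2).random((32, 32, 3)) * 255).astype(np.uint8)
out, converted = maybe_reduce_bit_depth(img)
print(converted)
```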


Patent
Michael Dodd1, Rodrigo Carceroni1
11 Sep 2013
TL;DR: In this paper, a system for video encoding and conversion is presented, including an image resolution conversion component operable to convert a source image frame from a first resolution to a second resolution to produce a first intermediate image frame at the second resolution.
Abstract: Implementations relate to a system for video encoding and conversion including an image resolution conversion component operable to convert a resolution of a source image frame from a first resolution to a second resolution to produce a first intermediate image frame at the second resolution; an image conversion component operable to receive the first intermediate image frame and convert an image size of the first intermediate image frame to another image frame size to produce a first viewable image frame; an image viewer component operable to display the first viewable image on a first display; and a color space conversion component, comprising a luminance conversion component and a chrominance conversion component, operable to receive the first viewable image frame and convert a first luminance value and a first chrominance value of the first viewable image frame to a second intermediate image frame having a second luminance value and a second chrominance value.

24 citations
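
The claimed pipeline (resolution conversion followed by a luminance/chrominance conversion) can be sketched as follows; the nearest-neighbor resize and the BT.601 coefficients are illustrative choices, not taken from the patent.

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resolution conversion."""
    h, w = img.shape[:2]
    ys = np.arange(new_h) * h // new_h
    xs = np.arange(new_w) * w // new_w
    return img[np.ix_(ys, xs)]

def rgb_to_ycbcr(rgb):
    """BT.601 full-range RGB -> YCbCr (luminance/chrominance) conversion."""
    m = np.array([[ 0.299,     0.587,     0.114    ],
                  [-0.168736, -0.331264,  0.5      ],
                  [ 0.5,      -0.418688, -0.081312 ]])
    ycbcr = rgb.astype(np.float64) @ m.T
    ycbcr[..., 1:] += 128.0
    return ycbcr

frame = (np.random.default_rng(3).random((480, 640, 3)) * 255).astype(np.uint8)
intermediate = resize_nearest(frame, 240, 320)   # resolution conversion
viewable = rgb_to_ycbcr(intermediate)            # color space conversion
print(viewable.shape)  # (240, 320, 3)
```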


Patent
29 Oct 2013
TL;DR: In this paper, an image detection unit (106) extracts image feature values from the images of each camera (101), and an image conversion unit (105) computes a blend rate according to the image feature values and composites an image of a superposition area wherein a plurality of camera images overlap.
Abstract: An objective of the present invention is to generate a more natural composite image wherein a solid object (obstacle) is easily seen. An image detection unit (106) extracts image feature values from the images of each camera (101). An image conversion unit (105) computes a blend rate according to the image feature values and composites an image of a superposition area wherein a plurality of camera images overlap. An assessment is made of the correspondence of the image feature values of each image in the superposition area, and a determination is made that a solid object is present if the correlation is weak. Furthermore, a determination is made that the solid object is present in the superposition area if the image feature values in each image have locationally overlapping portions. In such a circumstance, the images are composited with the blend rate of the image having the greater image feature value set larger.

24 citations
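
A minimal sketch of the blend-rate idea for the superposition area: weight each camera's pixel by its local image feature value, so the image with the stronger feature (for example, the one actually seeing the obstacle) dominates. Gradient magnitude stands in for the unspecified feature value, and the weighting formula is an assumption, not the patent's rule.

```python
import numpy as np

def gradient_magnitude(gray):
    """Simple per-pixel feature value: gradient magnitude."""
    gy, gx = np.gradient(gray.astype(np.float64))
    return np.hypot(gx, gy)

def blend_overlap(img_a, img_b, eps=1e-6):
    """Composite the overlap area with a per-pixel blend rate derived from
    each image's feature value (larger feature value -> larger weight)."""
    fa = gradient_magnitude(img_a.mean(axis=2))
    fb = gradient_magnitude(img_b.mean(axis=2))
    alpha = (fa + eps) / (fa + fb + 2 * eps)      # blend rate for image A
    return alpha[..., None] * img_a + (1 - alpha[..., None]) * img_b

rng = np.random.default_rng(4)
a, b = rng.random((100, 100, 3)), rng.random((100, 100, 3))
print(blend_overlap(a, b).shape)  # (100, 100, 3)
```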


Patent
12 Jul 2013
TL;DR: A three-dimensional object detection device has an image capturing device, an image conversion unit, a three-dimensional object detection unit, a three-dimensional object assessment unit, first and second foreign matter detection units and a controller, as discussed by the authors.
Abstract: A three-dimensional object detection device has an image capturing device, an image conversion unit, a three-dimensional object detection unit, a three-dimensional object assessment unit, first and second foreign matter detection units and a controller. The image capturing device captures images rearward of a vehicle. The three-dimensional object detection unit detects three-dimensional objects based on image information. The three-dimensional object assessment unit assesses whether or not a detected three-dimensional object is another vehicle. The foreign matter detection units detect whether or not foreign matter has adhered to a lens based on the change over time in luminance values for each predetermined pixel of the image capturing element and the change over time in the difference between an evaluation value and a reference value. The controller outputs control commands to the other means to suppress the assessment of foreign matter as another vehicle when foreign matter has been detected.

24 citations


Patent
Nagamasa Yoshinobu1
22 Feb 2013
TL;DR: In this article, an image conversion apparatus calculates, based on a first value for obtaining first coordinate values in a second image before first conversion, which correspond to coordinate values of one pixel in a first image after first conversion.
Abstract: An image conversion apparatus calculates, based on a first value for obtaining first coordinate values in a second image before first conversion, which correspond to coordinate values of one pixel in a first image after first conversion, a second value for obtaining second coordinate values in the second image, which correspond to coordinate values of a pixel adjacent to the one pixel in the first image. The apparatus converts the second coordinate values into third coordinate values for second conversion of converting a third image into the second image and converts the third image into the first image. In the calculation of the second value, addition or subtraction using a constant and a result of the calculation is iteratively executed for sequentially outputting values corresponding to results of multiplication of coordinate values of each pixel in the first image and the constant.

18 citations
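
The iterative add/subtract scheme reads like classic forward differencing: instead of multiplying each coordinate by the constant, keep a running accumulator and add the constant once per output pixel. The sketch below is an interpretation of that idea, not the patented procedure.

```python
def scaled_coordinates_incremental(width, scale):
    """Produce x * scale for x = 0..width-1 using only repeated addition,
    mirroring the iterative add/subtract scheme described in the abstract
    (an interpretation, not the patent's exact procedure)."""
    coords = []
    acc = 0.0
    for _ in range(width):
        coords.append(acc)
        acc += scale          # one addition per output pixel, no multiply
    return coords

# Matches the multiplicative form x * scale.
print(scaled_coordinates_incremental(5, 0.75))   # [0.0, 0.75, 1.5, 2.25, 3.0]
```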


Patent
28 Aug 2013
TL;DR: In this paper, the authors propose a multifunctional intelligent image conversion system for video image processing: a first conversion chip decodes camera link video data and feeds it to an FPGA (field programmable gate array) chip, an SDI (serial digital interface) decoding chip decodes SDI standard-definition images, and after the timing sequence is reintegrated by the FPGA chip the data can be directly converted to a video image conforming to the VGA, PAL, SDI and camera link interface protocols.
Abstract: The invention relates to a multifunctional intelligent image conversion system in the field of video image processing. A first conversion chip in the system is used for decoding the data of a camera link video image and inputting the converted data to an FPGA (field programmable gate array) chip, and an SDI (serial digital interface) decoding chip is used for decoding the data of an SDI standard-definition image and inputting the converted data to the FPGA chip. After the timing sequence is reintegrated by the FPGA chip, the data can be directly converted to a video image conforming to the VGA (video graphics array), PAL (phase alternating line), SDI and camera link interface protocols, or the data can be input to a DSP (digital signal processor) chip to compute the maximum and minimum values of the image gray scale and the image resolution; after pseudo-color enhancement and gray-scale stretching according to the image information calculated by the DSP chip, the image is output and displayed. With the multifunctional intelligent image conversion system, conversion among multiple interfaces such as camera link, SDI, VGA and PAL can be realized, and the quality of the output image is high.

12 citations


Patent
25 Apr 2013
TL;DR: In this paper, an information processing apparatus includes an acquirement unit configured to acquire a color image captured by an image capturing unit, an image conversion unit that converts the acquired color image into a monochrome image, and an output unit that outputs information showing the commodity specified by the first recognition unit or the second recognition unit.
Abstract: According to one embodiment, an information processing apparatus includes an acquirement unit configured to acquire a color image captured by an image capturing unit, an image conversion unit configured to convert the acquired color image into a monochrome image, a first recognition unit configured to specify a commodity included in the image captured by the image capturing unit based on the monochrome image, a second recognition unit configured to specify the commodity included in the image captured by the image capturing unit based on the acquired color image if the commodity cannot be specified by the first recognition unit, and an output unit configured to output information showing the commodity specified by the first recognition unit or the second recognition unit.

11 citations


Patent
03 Jul 2013
TL;DR: In this paper, a video display system, to which a plurality of camera video images are input, includes a vehicle information unit, a space generation unit, an image conversion unit and a camera information unit.
Abstract: PROBLEM TO BE SOLVED: To determine a virtual camera position (point of view) according to a running state when forming a virtual space and projecting an image, thereby improving the view. SOLUTION: A video display system, to which a plurality of camera video images are input, includes a vehicle information unit, a space generation unit, an image conversion unit and a camera information unit. According to running-state information acquired from the vehicle information unit, the space generation unit generates a surrounding space in the range that can be imaged by the cameras, and the image conversion unit generates an image based on the camera images and projects it onto the generated space. The image conversion unit also generates, while changing the virtual camera position or the blend factor used in image synthesis, a synthesized image of the plurality of camera images viewed from a virtual camera position set by the camera information unit.

10 citations


Patent
30 Aug 2013
TL;DR: In this article, a bird's-eye-view image generation device includes a captured image acquisition unit, an image conversion unit, and a joint setting unit which sets a line that extends in any direction on the opposite side to the vehicle image from the end point, between two radial directions directed toward the end point from the two imaging devices.
Abstract: A bird's-eye-view image generation device includes a captured image acquisition unit, an image conversion unit, a bird's-eye-view image combining unit, and a joint setting unit. The joint setting unit sets any position on the rim of a vehicle image corresponding to a vehicle included in the bird's-eye-view image as an end point in the overlapping imaging range of two bird's-eye-view images corresponding to two imaging devices whose imaging ranges overlap each other, and sets a line which extends in any direction on the opposite side to the vehicle image from the end point, between two radial directions directed toward the end point from the two imaging devices, as a joint which joins the two bird's-eye-view images that are combined.

Patent
26 Feb 2013
TL;DR: A three-dimensional object detection device includes an image capturing unit, an image conversion unit, a three-dimensional object detection unit, a light source detection unit and a control unit.
Abstract: A three-dimensional object detection device includes an image capturing unit, an image conversion unit, a three-dimensional object detection unit, a light source detection unit and a control unit. The image conversion unit converts the viewpoint of the images obtained by the image capturing unit to create bird's-eye view images. The three-dimensional object detection unit detects the presence of a three-dimensional object within the adjacent lane, and determines that a three-dimensional object is present within the adjacent lane when the difference waveform information is at a threshold value or higher. The control unit sets the threshold value higher, so that a three-dimensional object is more difficult to detect in a forward area than in a rearward area with respect to a line connecting the light source and the image capturing unit, when the light source has been detected.

Patent
23 May 2013
TL;DR: In this article, a method of image conversion for signage, displaying an image on a display surface, comprises determining a shape model of a three-dimensional object, determining geometric properties of the display surface including its position and orientation in a space, determining a position of a viewpoint in the space, and computing an inverse perspective projection onto the display surface based on the position of the viewpoint, for example the camera position in space, to generate a display image, wherein the display image, when displayed on the display surface and viewed from the viewpoint, appears to show the three-dimensional object with a position and orientation according to the position and orientation of the shape model in the space.
Abstract: A method of image conversion for signage, displaying an image on a display surface, comprises determining (1) a shape model of a three-dimensional object, determining (2) geometric properties of a display surface including a position and orientation of the display surface in a space, determining (3) a position of a viewpoint in the space, determining (4) a position and orientation of the shape model in the space, and computing (6) an inverse perspective projection onto the display surface based on the position of the viewpoint, for example the camera position in space, to generate a display image, wherein the display image, when displayed on the display surface and viewed through the viewpoint, appears to show the three-dimensional object with a position and orientation according to the position and orientation of the shape model in the space.
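
The projection step can be sketched geometrically: cast a ray from the viewpoint through each point of the shape model and intersect it with the display plane, which gives the position where that point must be drawn so it looks correct from the viewpoint. The plane parameterization and the function names below are assumptions, not the claimed method itself.

```python
import numpy as np

def project_to_display(points, eye, plane_origin, plane_u, plane_v):
    """Intersect eye->point rays with the display plane and return (u, v)
    coordinates on that plane (a sketch of the inverse perspective idea)."""
    normal = np.cross(plane_u, plane_v)
    uv = []
    for p in points:
        d = p - eye
        t = np.dot(plane_origin - eye, normal) / np.dot(d, normal)
        hit = eye + t * d                       # intersection with the plane
        rel = hit - plane_origin
        uv.append((np.dot(rel, plane_u) / np.dot(plane_u, plane_u),
                   np.dot(rel, plane_v) / np.dot(plane_v, plane_v)))
    return np.array(uv)

eye = np.array([0.0, 0.0, 2.0])                  # viewpoint in front of the plane
plane_origin = np.array([0.0, 0.0, 0.0])         # display surface at z = 0
plane_u, plane_v = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
model_point = np.array([[0.5, 0.5, -1.0]])       # shape-model point behind the plane
print(project_to_display(model_point, eye, plane_origin, plane_u, plane_v))
```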

Patent
24 Jul 2013
TL;DR: In this article, an image capturing means (10) having a photographic optical system (11) captures an image of a predetermined area, an image conversion means (31) viewpoint-converts the image obtained by the image capturing means to create a bird's-eye-view image, and a water droplet detection means (40) sets an arbitrary attention point in the image, a plurality of first reference points inside an imaginary circle of a predetermined radius centered on the attention point, and a plurality of second reference points corresponding to the first reference points outside the imaginary circle.
Abstract: The present invention is provided with: an image capturing means (10) having a photographic optical system (11) and adapted for capturing an image of a predetermined area; an image conversion means (31) for viewpoint-converting an image obtained by the image capturing means to create a bird's-eye-view image; a water droplet detection means (40) for setting an arbitrary attention point in the image, a plurality of first reference points inside an imaginary circle of a predetermined radius having the attention point as the center thereof, and a plurality of second reference points corresponding to the first reference points outside the imaginary circle, detecting edge information between the first reference points and second reference points, and assessing the circularity strength of this edge information, thereby detecting water droplets attached to the photographic optical system; a first three-dimensional object detection means (33) for generating differential waveform information from the differential image of the bird's-eye-view images at different points in time obtained by the image conversion means and detecting a three-dimensional object based on the differential waveform information; a three-dimensional object assessment means (38) for assessing whether the three-dimensional object that has been detected by the first three-dimensional object detection means is another vehicle; a water droplet removal device (41) for removing water droplets attached to the photographic optical system; and a control unit (39) for operating the water droplet removal means in accordance with the water droplet attachment state detected by the water droplet detection means.

Patent
17 Apr 2013
TL;DR: In this article, a method and a device for dynamic video mosaic is presented, where the video image of each frame is processed by a GPU (graphics processing unit), and the video can be fluently played after being added with mosaic.
Abstract: The invention discloses a method and a device for realizing dynamic video mosaic. The method mainly comprises the following steps: S101, establishing a feature training database by training an image collection; S102, establishing a timer, and grabbing frame data of video images at timed intervals; S103, monitoring a feature area, and preprocessing the images; S104, comparing the preprocessed images with the images in the training database so as to identify the feature area; S105, carrying out image conversion on the identified feature area; and S106, generating a video file according to the converted frame data. Through the technical scheme, the method and the device have the advantage that each video frame is processed by a GPU (graphics processing unit), and the video can be played smoothly after the mosaic is added.
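
Step S105 (image conversion of the identified feature area) is typically a mosaic; the block-averaging pixelation below is a minimal stand-in, with the block size and the rectangular region format assumed.

```python
import numpy as np

def mosaic_region(frame, x, y, w, h, block=8):
    """Pixelate frame[y:y+h, x:x+w] by averaging each block x block tile,
    a minimal stand-in for the mosaic conversion applied to the detected
    feature area."""
    out = frame.copy()
    roi = out[y:y + h, x:x + w]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = roi[by:by + block, bx:bx + block]
            tile[...] = tile.reshape(-1, tile.shape[-1]).mean(axis=0).astype(frame.dtype)
    return out

frame = (np.random.default_rng(5).random((240, 320, 3)) * 255).astype(np.uint8)
blurred = mosaic_region(frame, x=100, y=60, w=64, h=48)
print(blurred.shape)  # (240, 320, 3)
```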

Patent
11 Apr 2013
TL;DR: In this article, an ultrasonic diagnostic apparatus is proposed that improves processing speed when combining an ultrasonic image with a character image and displaying the composite image.
Abstract: PROBLEM TO BE SOLVED: To provide an ultrasonic diagnostic apparatus capable of contributing to the improvement of processing speed in combining an ultrasonic image with a character image and displaying the composite image. SOLUTION: The ultrasonic diagnostic apparatus includes: an ultrasonic probe 201 for transmitting ultrasonic waves to a subject, receiving reflection waves and generating reflected echo signals; an ultrasonic image composing part 203 for generating an ultrasonic image based on the reflected echo signals; a character image composing part 205 for composing a character image visualizing information added to the ultrasonic image; an image combining part 206 for generating a composite image by combining the ultrasonic image with the character image; and an image conversion/transfer part 204 for simultaneously executing a process of converting at least one of the buffer size and the file type of either the ultrasonic image or the character image into that of the other image, and a process of transferring one of the images to the image combining part 206.

Patent
29 Oct 2013
TL;DR: In this article, an image of a calibration reference pattern is captured and a plurality of first and second characteristic patterns of the reference pattern are identified, and an image is converted to an output image according to the coordinate conversion relationship.
Abstract: An image conversion method is provided. An image of a calibration reference pattern is captured. A plurality of first and a plurality of second characteristic patterns of the calibration reference pattern are identified. Coordinates of the first and second characteristic patterns in a first view angle coordinate system are obtained, and coordinates of the first and second characteristic patterns in a second view angle coordinate system are obtained, to obtain a coordinate conversion relationship between the first and second view angle coordinate systems. An input image is converted to an output image according to the coordinate conversion relationship.
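
One common way to obtain such a coordinate conversion relationship from matched characteristic patterns is a homography fitted with the direct linear transform. The sketch below assumes a planar (projective) relation between the two view angle coordinate systems, which the patent does not necessarily require.

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares homography H with dst ~ H @ src (points given as (x, y)).
    A standard DLT sketch, offered as one plausible form of the patent's
    'coordinate conversion relationship'."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=np.float64))
    return vt[-1].reshape(3, 3)

def convert(points, H):
    """Apply the conversion relationship to input-image coordinates."""
    pts = np.hstack([points, np.ones((len(points), 1))]) @ H.T
    return pts[:, :2] / pts[:, 2:3]

src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=np.float64)
dst = np.array([[10, 20], [110, 25], [115, 130], [5, 125]], dtype=np.float64)
H = fit_homography(src, dst)
print(np.round(convert(src, H), 2))   # reproduces dst
```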

Proceedings ArticleDOI
03 Jun 2013
TL;DR: A system is proposed that can automatically generate the depth map of a 2D image by segmenting the object with Sobel edge detection using different thresholds, thinning the image with the Z.S thinning method, and filling a gradient depth map to get the final depth map.
Abstract: In this paper, we propose a system that can automatically generate the depth map of a 2D image. First we segment the object by Sobel edge detection using different thresholds. Then the image is thinned with the Z.S thinning method and transformed into a grid image in which the edge of the object becomes a pixel. Using the gradient depth map, we fill the grid with its depth value. After automatically generating the depth map of the original 2D image, we combine the depth maps obtained with different thresholds to get the final depth map, and present the 2D-to-3D conversion hardware structure and system design for digitizing classical antiques and documents.
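
A minimal sketch of the first and last steps only (Sobel edge detection and a gradient depth assignment); the Z.S thinning and grid steps are omitted, and the bottom-near/top-far gradient is an assumption.

```python
import numpy as np

def sobel_edges(gray, threshold=0.25):
    """Binary edge map from Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    pad = np.pad(gray.astype(np.float64), 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for dy in range(3):                      # small explicit correlation
        for dx in range(3):
            win = pad[dy:dy + h, dx:dx + w]
            gx += kx[dy, dx] * win
            gy += ky[dy, dx] * win
    mag = np.hypot(gx, gy)
    return mag > threshold * mag.max()

def gradient_depth(edges):
    """Assign a simple gradient depth to edge pixels: rows near the image
    bottom are treated as near, rows near the top as far (an assumption)."""
    h, _ = edges.shape
    ramp = np.linspace(0.0, 1.0, h)[:, None]   # 0 = far (top) ... 1 = near (bottom)
    return np.where(edges, ramp, 0.0)

gray = np.random.default_rng(7).random((64, 64))
print(gradient_depth(sobel_edges(gray)).shape)  # (64, 64)
```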

Patent
14 Oct 2013
TL;DR: In this paper, an image based lane recognition method is provided to get rid of all kinds of noises by dividing camera images into several parts, thereby maximizing the accuracy of recognizing lanes.
Abstract: PURPOSE: An image based lane recognition method is provided to get rid of all kinds of noises by dividing camera images into several parts, thereby maximizing the accuracy of recognizing lanes CONSTITUTION: An image based lane recognition method comprises the following steps: setting a region which will be used for image processing for detecting lanes in an image photographed by a camera mounted on a vehicle (s10); dividing the region into a plurality of regions (s20); detecting the edge of each region which contains lanes by using edge detecting method (s30); extracting a straight line through Hough transform by using the edge of each region (s40); extracting a plurality of points arranged disparately on the straight lanes extracted above (s50); setting lane models using the least square method with the points (s60); and judging if a vehicle escapes from the desired lanes or not (s70) [Reference numerals] (s10) Entire region of interest setting (initialization); (s15) Black & white image conversion; (s20) Unit region of interest setting; (s30) Edge information extraction; (s40) Line extraction; (s50) Point element extraction; (s60) Lane model setting; (s70) Lane breakaway determination
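
Step s60 (fitting lane models to the extracted points with the least-squares method) can be sketched as below; the straight-line model x = a*y + b and the departure test are assumptions about details the abstract leaves open.

```python
import numpy as np

def fit_lane_model(points):
    """Least-squares straight-line lane model x = a*y + b from the points
    extracted along a detected lane (the model form is an assumption)."""
    pts = np.asarray(points, dtype=np.float64)
    y, x = pts[:, 1], pts[:, 0]
    A = np.column_stack([y, np.ones_like(y)])
    (a, b), *_ = np.linalg.lstsq(A, x, rcond=None)
    return a, b

def lane_departure(a, b, image_width, y_bottom, margin=40):
    """Flag departure when the fitted lane crosses near the image centre
    column at the bottom row (thresholds are illustrative)."""
    x_at_bottom = a * y_bottom + b
    return abs(x_at_bottom - image_width / 2) < margin

points = [(100, 479), (120, 400), (140, 320), (160, 240)]   # (x, y) samples
a, b = fit_lane_model(points)
print(lane_departure(a, b, image_width=640, y_bottom=479))
```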

Patent
25 Dec 2013
TL;DR: In this paper, a 3D model transformation system and method for image conversion is described; the system consists of a capturing module, a processing module and a model storage module, with the processing module connected to both the capturing module and the model storage module.
Abstract: The invention discloses a 3D model transformation system and method, and belongs to the technical field of image conversion. The system comprises a capturing module, a processing module and a model storage module, the processing module being respectively connected with the capturing module and the model storage module. The method specifically comprises the following steps: 1. original image data to be transformed are captured by means of the capturing module; 2. multiple kinds of feature information in the original image data are extracted by means of the processing module, and transformation is carried out on standard 3D model data corresponding to the feature information to form corresponding 3D model data; 3. the 3D model data are converted into a displayed 3D model image by means of a conversion module, and the 3D model image is displayed on a display module. The 3D model transformation system and method have the advantages that computer equipment or other electronic equipment is adopted to convert a 2D flat image into a corresponding 3D model image, the process is rapid, the cost is low, and implementation is easy.

Patent
28 Nov 2013
TL;DR: In this article, a transmitting apparatus is described that includes an image obtaining unit configured to obtain image data having pixel information including color information and having a first resolution, and an image conversion unit configured to delete the color information of at least a portion of the pixels of the obtained image data, to rearrange the pixel information of a plurality of pixels, and to convert the image data having the first resolution into image data having a second resolution lower than the first resolution.
Abstract: Provided is a transmitting apparatus including an image obtaining unit configured to obtain image data having pixel information including color information and having a first resolution, an image conversion unit configured to delete the color information of at least a portion of pixels of the obtained image data, to rearrange the pixel information of a plurality of pixels, and to convert the image data having the first resolution into image data having a second resolution lower than the first resolution, and an output unit configured to output, to a transmitter, the image data whose resolution has been converted from the first resolution into the second resolution by the image conversion unit, the transmitter having a maximum resolution of image data which the transmitter is allowed to wirelessly transmit to a receiving apparatus, the maximum resolution being the second resolution.
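
A hedged sketch of the conversion: drop the color information (here for all pixels, via an RGB-to-luma projection) and subsample to a resolution the transmitter can handle. The luma weights and the maximum side length are illustrative, and the pixel rearrangement described in the claim is reduced to plain subsampling.

```python
import numpy as np

def to_transmit_resolution(rgb, max_side=320):
    """Delete color information (keep luma only) and downsample so that
    neither dimension exceeds the transmitter's maximum (values assumed)."""
    luma = rgb @ np.array([0.299, 0.587, 0.114])      # color info removed
    step = int(np.ceil(max(luma.shape) / max_side))
    return luma[::step, ::step]                        # second, lower resolution

frame = np.random.default_rng(6).random((720, 1280, 3))
small = to_transmit_resolution(frame)
print(small.shape)   # (180, 320)
```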

Patent
26 Jun 2013
TL;DR: A multi-image providing system and a multi-image input device thereof are provided to obtain, with multiple cameras, images that leave no blind spot in the forward view, and a real-time image processing unit corrects and synthesizes the obtained images using camera parameters and an image conversion matrix.
Abstract: PURPOSE: A multi-image providing system and a multi-image input device thereof are provided to obtain, by using multiple cameras, multiple images with no blind spot in the forward view and to synthesize the obtained images. CONSTITUTION: The horizontal viewing angle of the cameras about the forward view of a body part is more than 120° and less than 180°. The vertical viewing angle of the cameras about the forward view is more than 60° and less than 180°. A real-time image processing unit (240) receives multi-image information, a camera parameter and an image conversion matrix from a multi-image input device, a camera distortion compensator (220) and an image conversion matrix generator (230), respectively; corrects the multiple images by using the camera parameter; and synthesizes the corrected images by using the image conversion matrix. [Reference numerals] (220) Camera distortion compensator; (230) Image conversion matrix generator; (240) Real-time image processing unit; (AA) Multi image information; (BB) Camera parameter; (CC) Preprocessor; (DD) Image conversion matrix; (EE) Synthesized image information

Patent
20 Jun 2013
TL;DR: In this paper, the problem of determining which external information processing device print data should be acquired from when a printer can acquire print data from a plurality of external Information Processing devices is addressed, and a solution is proposed.
Abstract: PROBLEM TO BE SOLVED: To determine which external information processing device print data should be acquired from when a printer can acquire print data from a plurality of external information processing devices. SOLUTION: It is determined from which external information processing device the print data should be acquired depending on the image quality setting. When the image quality setting information indicates normal image quality, a raster image is acquired from a print server. When the image quality setting information indicates high image quality, data acquisition and conversion are requested from an image conversion server, and a half-tone image is acquired from the image conversion server.

Patent
03 Jun 2013
TL;DR: In this article, a feature information extracting unit (210) extracts feature information from an input image and a depth map initializing unit (220) generates an initial depth map about the input image based on the feature information.
Abstract: PURPOSE: A depth map generating apparatus, a depth map generating method, a stereoscopic image converting apparatus and a stereoscopic image converting method are provided to correct errors in the depth map used for stereoscopic expression that arise during the automatic stereoscopic conversion of an image. CONSTITUTION: A feature information extracting unit (210) extracts feature information from an input image. A depth map initializing unit (220) generates an initial depth map for the input image based on the feature information. An FFT (Fast Fourier Transform) transforming unit (230) performs an FFT on the input image and transforms it into a frequency image. A depth map determining unit (240) calculates a correlation value by using an average value of the initial depth map and a representative value of the frequency image. [Reference numerals] (210) Feature information extracting unit; (220) Depth map initializing unit; (230) FFT (Fast Fourier Transform) transforming unit; (240) Depth map determining unit; (AA) 2D image

Patent
Ando Ichiro1
08 Jul 2013
TL;DR: A moving image compression device has an obtaining unit, an image conversion unit, and a compression processing unit as mentioned in this paper, where the obtaining unit obtains a RAW moving image having plural frames associated in an order of imaging, the frames having pixels of three different color components which are disposed periodically according to a color array with two rows and two columns.
Abstract: A moving image compression device has an obtaining unit, an image conversion unit, and a compression processing unit. The obtaining unit obtains a RAW moving image having plural frames associated in an order of imaging, the frames having pixels of three different color components which are disposed periodically according to a color array with two rows and two columns. The image conversion unit separates, in a target frame, a first pixel group corresponding to a first color component of odd rows and a second pixel group corresponding to the first color component of even rows, and alternately arrays, in a time base direction, a first image including the first pixel group and a second image including the second pixel group. The compression processing unit performs inter-frame prediction coding compression on the first image and the second image.
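
A sketch of the separation step for a Bayer-like two-row, two-column color array: take the first color component's pixels from even rows and odd rows separately and alternate the two resulting images along the time base before inter-frame prediction coding. The RGGB-style layout and the sample positions are assumptions.

```python
import numpy as np

def split_first_component(raw_frame):
    """Assume an RGGB-style 2x2 array where the first component (e.g. green)
    sits at (even row, odd col) and (odd row, even col). Return the even-row
    and odd-row pixel groups as two half-height images."""
    even_rows = raw_frame[0::2, 1::2]   # first component on even rows
    odd_rows = raw_frame[1::2, 0::2]    # first component on odd rows
    return even_rows, odd_rows

def to_temporal_sequence(raw_frames):
    """Alternate the two pixel-group images in the time-base direction,
    mirroring the arrangement described before inter-frame coding."""
    seq = []
    for frame in raw_frames:
        first, second = split_first_component(frame)
        seq.extend([first, second])
    return seq

frames = [np.random.default_rng(i).integers(0, 1024, (8, 8)) for i in range(3)]
sequence = to_temporal_sequence(frames)
print(len(sequence), sequence[0].shape)   # 6 (4, 4)
```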

Patent
22 Apr 2013
TL;DR: In this paper, an image processing method includes the steps of reading, with a camera of a terminal 1-1, a marker 22 printed on a printing medium 10 together with painting-like image data 32; following a link to the location at a URL address of a server indicated by the marker 22; and downloading original image data 21 stored at the location to display the data on a display unit 31.
Abstract: PROBLEM TO BE SOLVED: To allow a user to easily see the original picture of a picture subjected to image conversion, starting from a printing medium carrying the converted picture. SOLUTION: An image processing method includes the steps of: reading, with a camera of a terminal 1-1, a marker 22 printed on a printing medium 10 together with painting-like image data 32; following a link to the location at a URL address of a server indicated by the marker 22; and downloading original image data 21 stored at the location to display the data on a display unit 31.

Patent
05 Jun 2013
TL;DR: A method and a system for internet image conversion are presented: 1) a front end cache device receives an image conversion request containing a conversion parameter and judges whether an image converted according to the request is stored in its cache; if so, the converted image is retrieved from the cache and returned to the client side, and if not, the request is sent to an image conversion device; 2) the image conversion device converts the image according to the request, returns the converted image to the client side, and stores it in the cache of the front end cache device.
Abstract: The invention discloses a method and a system for internet image conversion. The method comprises the following steps: 1. a front end cache device receives an image conversion request which comprises a conversion parameter and judges whether an image converted according to the image conversion request is stored in a cache; if yes, the converted image is picked up from the cache and returned to a client side; if not, the image conversion request is sent to an image conversion device; 2. the image conversion device converts the image according to the image conversion request, returns the converted image to the client side, and stores the converted image in the cache of the front end cache device. The method and the system for internet image conversion can solve the problem of low conversion efficiency.
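
The front end cache logic of step 1 can be sketched as a dictionary keyed by the source image and the conversion parameter; the key format, the in-memory store, and the toy conversion function are assumptions for illustration.

```python
from typing import Callable, Dict, Tuple

class FrontEndCache:
    """Serve converted images from a cache, falling back to the conversion
    device and storing its result (an illustrative in-memory sketch)."""

    def __init__(self, convert: Callable[[bytes, str], bytes]):
        self._convert = convert
        self._cache: Dict[Tuple[bytes, str], bytes] = {}

    def handle_request(self, image: bytes, params: str) -> bytes:
        key = (image, params)
        if key in self._cache:                     # converted copy already cached
            return self._cache[key]
        converted = self._convert(image, params)   # forward to the conversion device
        self._cache[key] = converted               # store for later requests
        return converted

# Toy conversion device: pretend 'resize:50%' halves the payload.
cache = FrontEndCache(lambda img, p: img[: len(img) // 2] if p == "resize:50%" else img)
print(len(cache.handle_request(b"x" * 100, "resize:50%")))   # 50; second call hits the cache
```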

Patent
02 May 2013
TL;DR: A method for estimating image conversion parameters is disclosed, in which an image processing unit estimates the parameters from the linear brightness change of a single captured image, from a comparison of linear and non-linear images, or from the difference in exposure quantities.
Abstract: A method for estimating image conversion parameters is disclosed. First, an image capturing unit captures at least one image of an object and passes it to an image processing unit for calculation. The image processing unit can use the linear brightness change of a single captured image to estimate the image conversion parameters, can compare linear and non-linear images to estimate them, and can further use the difference in exposure quantities to estimate them. The estimation of the image conversion parameters can therefore be completed well and easily.

01 Jan 2013
TL;DR: A new technique is proposed for embedding a watermark that will not be erased during image conversion from a normal image to an HDR image; the compound mappings used are proved to be reversible, which allows lossless recovery of the original image from the watermarked image.
Abstract: In this paper, a generic visible watermarking method with a capability of lossless image recovery is proposed. The method is based on the use of deterministic one-to-one compound mappings of pixel values to embed a variety of visible watermarks in the cover image. Conversion of watermarked normal images into HDR images is used in various image segmentation applications, but during such conversion the embedded watermark is erased because of the tone mapping used in HDR image conversion. In this paper we propose a new technique for embedding a watermark that is not erased during conversion from a normal image to an HDR image. In the proposed system, the original HDR image is transformed to a reference image by applying a logLUV transformation, and the reverse log transform is applied to obtain the watermarked version of the given original image; when a generic TM (tone mapping) is applied to the watermarked HDR image, the watermark is still present as long as the assumption of similarity between the log transform and the TM holds. The compound mappings are proved to be reversible, which allows lossless recovery of the original image from the watermarked image.