
Showing papers on "Image conversion published in 2018"


Journal ArticleDOI
TL;DR: The accuracy achieved with different overlapping block sizes is influenced by the size of the forged area, the distance between the two forged areas, and the threshold values used in the research.
Abstract: Since powerful editing software is easily accessible, images can be manipulated conveniently without leaving any noticeable evidence. Hence, it becomes a challenging task to authenticate the genuineness of images, as the naked eye cannot distinguish a tampered image from the actual image. Among the most common tampering methods is copy-move forgery, in which regions are copied and pasted within the same image. The Discrete Cosine Transform (DCT) has the ability to detect tampered regions accurately; nevertheless, the size of the overlapping blocks influences performance in terms of precision (false positives) and recall (false negatives). In this paper, the researchers implement copy-move image forgery detection using DCT coefficients. Firstly, the RGB image is transformed into a grayscale image using the standard image conversion technique. The grayscale image is then segmented into overlapping blocks of m × m pixels, with m = 4 and m = 8. The 2D DCT coefficients of every block are calculated and repositioned into a feature vector by zig-zag scanning. The feature vectors are then sorted lexicographically, and duplicated blocks are located by Euclidean distance. To gauge the accuracy and storage performance of the detection technique at the two block sizes, the similarity threshold D_similar = 0.1 and the distance threshold N_d = 100 were applied to 10 input images. The 4 × 4 overlapping blocks produced a high false-positive rate, which decreased the accuracy of forgery detection, whereas the 8 × 8 overlapping blocks achieved more accurate detection in terms of precision and recall. In a nutshell, the accuracy obtained with different overlapping block sizes is influenced by the size of the forged area, the distance between the two forged areas, and the threshold values used in the research.
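The block-matching pipeline described above can be sketched in a few dozen lines of numpy. This is a minimal illustration, not the paper's implementation: the zig-zag scan is approximated by an anti-diagonal ordering, and the thresholds D_similar = 0.1 and N_d = 100 are taken from the abstract.

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II basis matrix, so D @ block @ D.T is the 2-D DCT
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2.0 / n)

def zigzag_indices(n):
    # order coefficients by anti-diagonals (low frequencies first);
    # approximates the alternating zig-zag scan used in the paper
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1], p))

def block_features(gray, m=8):
    D = dct_matrix(m)
    order = zigzag_indices(m)
    feats, pos = [], []
    h, w = gray.shape
    for i in range(h - m + 1):
        for j in range(w - m + 1):
            coef = D @ gray[i:i + m, j:j + m] @ D.T
            feats.append([coef[a, b] for a, b in order])
            pos.append((i, j))
    return np.array(feats), pos

def find_duplicates(feats, pos, d_sim=0.1, n_d=100):
    # lexicographic sort, then compare neighbouring feature vectors
    order = np.lexsort(feats.T[::-1])
    matches = []
    for a, b in zip(order[:-1], order[1:]):
        if np.linalg.norm(feats[a] - feats[b]) < d_sim:
            (i1, j1), (i2, j2) = pos[a], pos[b]
            # ignore physically adjacent blocks (squared-distance threshold)
            if (i1 - i2) ** 2 + (j1 - j2) ** 2 >= n_d:
                matches.append((pos[a], pos[b]))
    return matches
```

With a synthetic forgery (an 8 × 8 patch copied elsewhere in the image), the copied pair shows up as a match between the two block positions.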

101 citations


Journal ArticleDOI
TL;DR: This work presents an automatic learning-based 2D-3D image conversion approach, based on the key hypothesis that color images with similar structure likely present a similar depth structure, and estimates the depth of a color query image using the prior knowledge provided by a repository of color + depth images.
Abstract: There has been a significant increase in the availability of 3D players and displays in recent years. Nonetheless, the amount of 3D content has not grown at the same rate. To alleviate this problem, many algorithms for converting images and videos from 2D to 3D have been proposed. Here, we present an automatic learning-based 2D-3D image conversion approach, based on the key hypothesis that color images with similar structure are likely to present a similar depth structure. The presented algorithm estimates the depth of a color query image using the prior knowledge provided by a repository of color + depth images. The algorithm clusters this database according to structural similarity, and then creates a representative of each color-depth image cluster that will be used as a prior depth map. The appropriate prior depth map for a given color query image is selected by comparing the structural similarity in the color domain between the query image and the database. The comparison is based on a K-Nearest Neighbor framework that uses a learning procedure to build an adaptive combination of image feature descriptors. The best correspondences determine the cluster and, in turn, the associated prior depth map. Finally, this prior estimate is enhanced through segmentation-guided filtering to obtain the final depth map. The approach has been tested on two publicly available databases and compared with several state-of-the-art algorithms to demonstrate its efficiency.
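The cluster-representative lookup at the heart of this approach amounts to a nearest-neighbour query in feature space. The sketch below is a simplification under stated assumptions: the feature vectors, weights, and prior depth maps are hypothetical stand-ins for the paper's learned descriptor combination.

```python
import numpy as np

def nearest_prior_depth(query_feat, rep_feats, prior_depths, weights=None):
    """Return the prior depth map whose cluster representative is closest
    to the query image's feature vector (weighted Euclidean distance)."""
    q = np.asarray(query_feat, float)
    # the learned adaptive descriptor combination is modelled here
    # as a simple per-dimension weight vector (an assumption)
    w = np.ones_like(q) if weights is None else np.asarray(weights, float)
    dists = [np.sqrt(np.sum(w * (q - np.asarray(f, float)) ** 2))
             for f in rep_feats]
    return prior_depths[int(np.argmin(dists))]
```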

22 citations


Proceedings ArticleDOI
24 May 2018
TL;DR: This paper proposes a novel scheme based on fragment-to-grayscale image conversion and deep learning to extract more hidden features and therefore improve the accuracy of classification of file fragments.
Abstract: File fragment classification is an important step in digital forensics. The most popular methods are based on traditional machine learning with hand-crafted features such as N-grams, Shannon entropy, or Hamming weights. However, these features are far from sufficient to classify file fragments. In this paper, we propose a novel scheme based on fragment-to-grayscale image conversion and deep learning to extract more hidden features and thereby improve classification accuracy. Benefiting from its multi-layered feature maps, our deep convolutional neural network (CNN) model can extract nearly ten thousand features through the non-linear connections between neurons. The proposed CNN model was trained and tested on the public GovDocs dataset. The experimental results show that it achieves 70.9% classification accuracy, which is higher than that of existing works.
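The fragment-to-grayscale step can be illustrated with a common byte-to-pixel mapping (the paper's exact layout is not specified here; this sketch pads the fragment to a square array, one byte per pixel):

```python
import math
import numpy as np

def fragment_to_grayscale(fragment: bytes, side=None):
    """Map a raw file fragment to a 2-D grayscale array.

    Each byte (0-255) becomes one pixel intensity; the fragment is
    zero-padded to the next square size unless `side` is given.
    """
    data = np.frombuffer(fragment, dtype=np.uint8)
    if side is None:
        side = math.isqrt(len(data) - 1) + 1 if len(data) else 1
    padded = np.zeros(side * side, dtype=np.uint8)
    padded[:min(len(data), side * side)] = data[:side * side]
    return padded.reshape(side, side)
```

The resulting array can be fed to a CNN as a single-channel image.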

21 citations


Patent
06 Mar 2018
TL;DR: In this paper, a generative neural network that combines image content characteristics and style characteristics is used for image conversion, and the resolution of the converted image is then enhanced by a super-resolution neural network so that a high-resolution converted image can be acquired.
Abstract: The invention provides an image processing method, processing device and processing equipment. Image conversion is realized by a generative neural network that combines image content characteristics and style characteristics, and the resolution of the converted image output by the generative neural network is then enhanced by a super-resolution neural network so that a high-resolution converted image can be acquired. The image processing method comprises the steps that: an input image is acquired; a first noise image and a second noise image are acquired; image conversion processing is performed on the input image by the generative neural network, according to the input image and the first noise image, so as to output a converted first output image; and high-resolution conversion processing is performed on the first output image and the second noise image by the super-resolution neural network so as to output a second output image, wherein the first noise image and the second noise image are different.

21 citations


Journal ArticleDOI
TL;DR: A line-based date spotting approach using a hidden Markov model (HMM) is proposed to detect date information in a given text, and the results show the effectiveness of the proposed date spotting approach.

15 citations


Patent
21 Sep 2018
TL;DR: In this paper, the authors proposed an image conversion method which consists of two steps: obtaining a current image to be processed and inputting the current image into a trained target face image conversion model.
Abstract: The invention relates to an image conversion method. The method comprises the following steps: obtaining a current image to be processed, wherein the current image to be processed comprises a face part; inputting the current image to be processed into a trained target face image conversion model, wherein the trained target face image conversion model is used for converting the face part of the input image from a first style to a second style, and the trained target face image conversion model is trained from an original face image conversion model and a discrimination network model; and obtaining a target-style face image output by the trained target face image conversion model. The image conversion method does not need labeled training samples, and has low training cost and high accuracy. Besides, the invention also provides an image conversion device, computer equipment and a storage medium.

15 citations


Patent
19 Jun 2018
TL;DR: In this paper, a facial image conversion method based on a cycle generative adversarial network (GAN) is proposed: a generator network and a discriminator network compete against each other; the cycle GAN is formed from the traditional GAN loss function plus a new cyclic-consistency loss function; the Wasserstein GAN is used to improve training; the SSIM loss matches the brightness, contrast and structural information of the generated and input images; and a binary mask is input with the images during training, with an element-wise product applied to the reconstruction loss.
Abstract: The invention provides a facial image conversion method based on a cycle generative adversarial network. The main components of the method are a Wasserstein generative adversarial network (WGAN), a structural similarity (SSIM) loss, a background-subtraction face mask, and a generative adversarial network (GAN). The method comprises the steps of: using a generator network and a discriminator network that compete against each other; forming the cycle GAN from the traditional GAN loss function and a new cyclic-consistency loss function; using the WGAN loss to improve the training of the GAN; matching, through the SSIM loss, the brightness, contrast and structural information of the generated image and the input image; and inputting a binary mask with the images during training, applying an element-wise product to the reconstruction loss. By using the cycle generative adversarial network, the method achieves higher consistency and stability in converting facial expressions, handles facial details and edge details well, and thus produces a converted image that is more natural and more realistic.
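The SSIM term referenced above compares the brightness (means), contrast (variances) and structure (covariance) of two images. A minimal single-window sketch follows; practical implementations compute SSIM over local sliding windows, and the constants c1, c2 assume intensities scaled to [0, 1]:

```python
import numpy as np

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # global (single-window) SSIM between two images in [0, 1]:
    # luminance from the means, contrast from the variances,
    # structure from the covariance
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def ssim_loss(generated, target):
    # SSIM is 1 for identical images, so 1 - SSIM works as a loss term
    return 1.0 - ssim(generated, target)
```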

13 citations


Patent
15 Jan 2018
TL;DR: In this article, a computer structure combining a neural network circuit with a parallel processing processor is presented, together with an image conversion and analysis processing method using the same, by which an existing image can be analyzed and converted into a replacement image.
Abstract: The present invention relates to a computer structure combining a neural network circuit with a parallel processing processor, and an image conversion and analysis processing method using the same. The computer structure of the present invention comprises: a neural network circuit for extracting information on feature points, color, etc. from an image of an object, converting it into two-dimensional coordinate values for the image of the object, and outputting the coordinate values in the form of a packet; a plurality of processing units; a parallel processing processor for performing multiple recognition circuit operations between pixels within an image patch formed around each extracted feature point; and an on-chip network apparatus connected between the neural network circuit and the parallel processing processor, which transmits the packet of coordinate values converted in the neural network circuit to the parallel processing processor. According to the present invention, coarse processing of the entire image is accelerated by the neural network circuit, and detailed image processing is then performed by the high-performance parallel processing digital processor on that basis. As a result, an existing image can be analyzed and converted into a replacement image.

10 citations


Patent
06 Sep 2018
TL;DR: In this paper, a cell visualization system includes a digital holographic microscopy (DHM) device, a training device, and a virtual staining device, which produces DHM images of cells and colorizes the images based on an algorithm generated by the training device using generative adversarial networks and unpaired training data.
Abstract: A cell visualization system includes a digital holographic microscopy (DHM) device, a training device, and a virtual staining device. The DHM device produces DHM images of cells, and the virtual staining device colorizes the DHM images based on an algorithm generated by the training device using generative adversarial networks and unpaired training data. A computer-implemented method for producing a virtually stained DHM image includes acquiring an image conversion algorithm which was trained using the generative adversarial networks, receiving a DHM image with depictions of one or more cells, and virtually staining the DHM image by processing it using the image conversion algorithm. The virtually stained DHM image includes digital colorization of the one or more cells to imitate the appearance of a corresponding actually stained cell.

10 citations


Patent
15 Jun 2018
TL;DR: Zhang et al. as discussed by the authors proposed an image domain conversion network based on generative adversarial networks (GAN) and a conversion method, which includes a U-shaped generative network, an authenticity authentication network and a pairing authentication network.
Abstract: The invention discloses an image domain conversion network based on generative adversarial networks (GAN) and a conversion method. The image domain conversion network includes a U-shaped generative network, an authenticity authentication network and a pairing authentication network. The image domain conversion process mainly includes the following steps: 1) training the U-shaped generative network and establishing its network model; and 2) normalizing the to-be-converted image and then inputting it into the network model established in step 1) to complete image domain conversion. With this image domain conversion network, image domain conversion of a local region of the image can be achieved, local-domain conversion quality is high, the judgment ability of the network is strong, the stability of image conversion is high, and the authenticity of the generated image is greatly improved.

9 citations


Patent
27 Jul 2018
TL;DR: In this paper, a unified generative adversarial network-based multi-domain image conversion technique is proposed, in which a discriminator D learns to distinguish true and false images and to classify true images into their corresponding domain; a generator G takes an image and a target domain label as inputs to generate a false image; given the original domain label, G tries to reconstruct the original image from the false image; and G finally tries to generate an image that is indistinguishable from a true image and is classified by D into the target domain.
Abstract: The invention discloses a unified generative adversarial network-based multi-domain image conversion technique. Its main components are: training a discriminator; converting from an original domain to a target domain; converting from the target domain back to the original domain; and fooling the discriminator. The technique comprises the following processes: a discriminator D learns to distinguish true and false images and to classify true images into their corresponding domain; an image and a target domain label are taken as inputs of a generator G to generate a false image; given the original domain label, G tries to reconstruct the original image from the false image; D continually learns to distinguish true images from synthesized images, while G continually learns to fool D; and finally G tries to generate an image that is indistinguishable from a true image and is classified by D into the target domain. Because a single model performs conversion among multiple image domains, the image conversion quality is improved, and input images can be flexibly converted into the expected target images.

Patent
01 May 2018
TL;DR: In this paper, a deep learning-based image style migration method and system is proposed: the cost among a training image, a style image and the generated image is calculated with a VGG (Visual Geometry Group) network; the image conversion network is updated by an Adam optimizer according to the calculated cost until it converges; the trained model file is saved; and finally an image whose style needs to be migrated is input into the model to obtain the style-migrated result.
Abstract: The invention discloses a deep learning-based image style migration method and system, and relates to the field of image processing. The method comprises the steps of: calculating the cost among a training image, a style image and the generated image by using a VGG (Visual Geometry Group) network; updating the image conversion network with an Adam optimizer according to the calculated cost until the network converges; saving the trained model file; and finally inputting an image whose style needs to be migrated into the model file to obtain the result after style migration. By means of the method and system, ordinary pictures can be converted into elegant artistic works; experiments show that the method learns the texture of artistic images well, and the system can be deployed on a cloud platform with a very high load capacity.
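The style part of the VGG cost is conventionally computed from Gram matrices of feature maps. A hedged numpy sketch, with the VGG activations assumed to be given as (channels, height, width) arrays:

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height, width) activation map from one layer;
    # the Gram matrix captures channel co-activation statistics (texture)
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(gen_feats, style_feats):
    # sum of squared Gram-matrix differences over the chosen layers
    return sum(np.sum((gram_matrix(g) - gram_matrix(s)) ** 2)
               for g, s in zip(gen_feats, style_feats))
```

In a full system, `gen_feats` and `style_feats` would be activations taken from several VGG layers, and this term would be weighted against a content loss before the Adam update.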

Patent
19 Jan 2018
TL;DR: In this paper, an image conversion system and method is described, which is implemented on a computation device, wherein the computation device comprises at least one processor, at least a computerreadable storage medium and a communication port which is connected with an imaging device.
Abstract: The invention discloses an image conversion system and method. The method is implemented on a computation device, wherein the computation device comprises at least one processor, at least one computer-readable storage medium and a communication port connected with an imaging device. The method comprises the following steps: obtaining a first group of projection data related to a first dosage level; reconstructing a first image on the basis of the first group of projection data; determining a second group of projection data on the basis of the first group, wherein the second group of projection data is related to a second dosage level, and the second dosage level is lower than the first; reconstructing a second image on the basis of the second group of projection data; and training a first neural network model on the basis of the first image and the second image, wherein the trained first neural network model is configured to convert a third image into a fourth image, and the fourth image presents a noise level lower than that of the third image.
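One common way to derive the second, lower-dose projection group from the first is count-domain Poisson noise injection under the Beer-Lambert law; note this is an assumed scheme for illustration, not necessarily the patent's, and the incident intensity `i0` is a hypothetical parameter:

```python
import numpy as np

def simulate_low_dose(projections, dose_fraction, i0=1e5, rng=None):
    """Resample projection data as if acquired at a fraction of the dose.

    Beer-Lambert: detected counts = I0 * exp(-p). Scaling I0 down by
    `dose_fraction` raises the relative Poisson noise in the counts,
    which propagates into the re-derived projections.
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = dose_fraction * i0 * np.exp(-np.asarray(projections, float))
    noisy = rng.poisson(counts).astype(float)
    # guard against log(0) for very low counts, then invert Beer-Lambert
    return -np.log(np.maximum(noisy, 1.0) / (dose_fraction * i0))
```

The (first image, second image) pair reconstructed from the clean and noisy projections would then form one training example for the denoising network.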

Patent
10 Dec 2018
TL;DR: In this paper, the authors present an artificial intelligence device capable of generating learning images for machine learning, and a control method thereof, which includes a neural network candidate model selection unit for selecting at least one neural network model to perform machine learning among a plurality of previously stored neural network candidates.
Abstract: The present invention relates to an artificial intelligence device capable of generating learning images for machine learning, and a control method thereof. The artificial intelligence device includes: a neural network candidate model selection unit for selecting at least one neural network model to perform machine learning among a plurality of previously stored neural network candidate models; an image data augmentation unit for generating a plurality of candidate image data from original image data by using at least one of a plurality of image conversion methods; an arithmetic efficiency evaluation unit for calculating the arithmetic efficiency of the at least one neural network model and the plurality of candidate image data; a learning image data selection unit that determines, based on the arithmetic efficiency, one neural network model from the at least one neural network model and at least one image datum from the plurality of candidate image data; and a machine learning process unit that learns the determined image data by using the determined neural network model.

Patent
21 Sep 2018
TL;DR: The image conversion-based heterogeneous image block matching method provided by the invention has the following steps: acquiring training samples and test samples, constructing an image conversion network, training the image conversion network, constructing a feature extraction and matching network, training that network, and predicting the matching result.
Abstract: An image conversion-based heterogeneous image block matching method provided by the invention has the following steps: acquiring training samples and test samples; constructing an image conversion network; training the image conversion network; constructing a feature extraction and matching network; training the feature extraction and matching network; and predicting the matching result. The invention overcomes the prior-art problem that features extracted from heterogeneous images differ greatly and are inaccurate, effectively reduces the matching difficulty, and improves the accuracy and robustness of heterogeneous image block matching.

Patent
23 Nov 2018
TL;DR: In this article, a video optimization processing method for a remote desktop is presented, comprising the steps of: registering a coder-decoder corresponding to a video decoding format supported by a video processing unit when initialization of a preset multimedia video processing tool is started; judging whether the video decoding format corresponding to the registered coder-decoder matches the coding format; if so, calling the video processing unit to decode the video frame data and returning a corresponding file descriptor; and transferring the file descriptor to the processing interface of a bitmap hardware acceleration module, thus allowing the module to perform image conversion on the decoded video data.
Abstract: The invention provides a video optimization processing method for a remote desktop. The method comprises the steps of: registering a coder-decoder corresponding to a video decoding format supported by a video processing unit when initialization of a preset multimedia video processing tool is started; judging whether the video decoding format corresponding to the registered coder-decoder matches the coding format; if so, calling the video processing unit to decode the video frame data and returning a corresponding file descriptor; and transferring the file descriptor to the processing interface of a bitmap hardware acceleration module, thus allowing the bitmap hardware acceleration module to perform image conversion on the decoded video data. The invention also provides a video optimization processing device for the remote desktop. The method and device solve the technical problem of poor video playback experience caused by existing ARM-SoC embedded platforms decoding and performing image conversion in software.

Proceedings ArticleDOI
01 Sep 2018
TL;DR: By means of psychometric experiments, it is found that the algorithm gives the most accurate image reproduction when used with the $\Delta E_{99}$ colour metric, and that it performs at the level of, or better than, other state-of-the-art spatial algorithms.
Abstract: We present an algorithm for conversion of colour images to greyscale. The underlying idea is that local perceptual colour differences in the colour image should translate into local differences in greylevel in the greyscale image. This is obtained by constructing a gradient for the greyscale image from the eigenvalues and eigenvectors of the structure tensor of the colour image, which, in turn, is computed by means of perceptual colour difference metrics. The greyscale image is then constructed from the gradient by means of linear anisotropic diffusion, where the diffusion tensor is constructed from the same structure tensor. By means of psychometric experiments, it is found that the algorithm gives the most accurate image reproduction when used with the $\Delta E_{99}$ colour metric, and that it performs at the level of, or better than, other state-of-the-art spatial algorithms. Surprisingly, the only algorithm that can compete in terms of accuracy is a simple luminance map computed as the $L^{*}$ channel of the image represented in the CIELAB colour space.
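The structure tensor underlying the gradient construction can be sketched as follows; note this simplified version uses plain per-channel RGB differences where the paper uses perceptual colour-difference metrics such as $\Delta E_{99}$:

```python
import numpy as np

def structure_tensor(rgb):
    # per-pixel 2x2 structure tensor, summed over colour channels;
    # a perceptual metric would replace these plain channel gradients
    gy, gx = np.gradient(rgb, axis=(0, 1))
    jxx = (gx * gx).sum(axis=2)
    jyy = (gy * gy).sum(axis=2)
    jxy = (gx * gy).sum(axis=2)
    return jxx, jxy, jyy

def colour_gradient_magnitude(rgb):
    # the square root of the larger eigenvalue of the tensor gives
    # the local colour contrast the greyscale gradient should match
    jxx, jxy, jyy = structure_tensor(rgb)
    tr, det = jxx + jyy, jxx * jyy - jxy ** 2
    lam = 0.5 * (tr + np.sqrt(np.maximum(tr ** 2 - 4 * det, 0.0)))
    return np.sqrt(lam)
```

The paper then integrates this gradient field into a greylevel image via linear anisotropic diffusion, which is beyond this sketch.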

Patent
16 Oct 2018
TL;DR: In this article, an intelligent English assisted learning system consisting of a timer, AR spectacles, an image collector, a mini-projector, a sounder, a time arrangement entering mechanism, a display, and a central controller is presented.
Abstract: The invention relates to an intelligent English assisted learning system. The system comprises a timer, AR spectacles (1), an image collector (2), a mini-projector (3), a sounder, a time arrangement entering mechanism (5), a display (6) and a central controller. In the invention, AR technology is applied to an English learning system: a real object is imaged, the acquired image is compared with the object images stored in the system, and after a successful match the current object is identified; the object's Chinese name and English word are then extracted from the system memory, and image conversion is applied to them in combination with the real scene, so that an association between graphics and English words is established. Meanwhile, by listening to the pronunciation of the English words, memory of the words can be further deepened, improving learning efficiency and the effective utilization of time.

Patent
23 Jan 2018
TL;DR: In this article, a medical image display method and a display device are described. After a user triggers an image display request, the currently requested first medical image and the corresponding second medical image are displayed in a display window such that the two medical images are not displayed in an overlapped manner.
Abstract: The invention discloses a medical image display method and a display device. The method comprises the following steps: after a user triggers an image display request, the currently requested first medical image and the corresponding second medical image are displayed in an image display window such that the first and second medical images are not displayed in an overlapped manner. Thus, by displaying only the first medical image on a small screen and avoiding overlap between the images, a clear display effect can be achieved. The user can then operate the first medical image according to browsing requirements; during conversion of the first medical image, the display position and/or display size of the second medical image is adjusted adaptively. Therefore, while clearly browsing the first medical image on a mobile terminal, the user can query related information about it through the second medical image and identify lesions from the displayed images.

Patent
20 Nov 2018
TL;DR: In this paper, a computer structure in which a deep neural network circuit is coupled with a processor is presented, using a machine learning technique that achieves abstraction through the combination of several nonlinear conversion techniques, together with an image conversion and analysis processing method using the same.
Abstract: Various types of data are increasing exponentially through the IoT, the cloud, and the like over wired/wireless networks, and appear in various industrial fields such as factory automation and medical equipment. Such data can be used in image recognition, data analysis, and the like through artificial intelligence functions, making interconnected information effectively usable. The present invention relates to a computer structure in which a deep neural network circuit is coupled with a processor, using a machine learning technique that achieves abstraction through the combination of several nonlinear conversion techniques, and an image conversion and analysis processing method using the same. According to the present invention, image data can be analyzed and processed by: extracting information collected through a cloud server, together with feature points, colors, lines, shapes, and the like of structured data and images corrected through the cloud server; converting an image of an object into 2D coordinates; and repeatedly performing structuring, filtering, nonlinearization and extraction in software that includes a neural network circuit outputting the coordinates in packet format and a plurality of processing units, and that performs multiple recognition circuit operations between pixels of an image patch formed around each extracted feature point and on multilayer-based data.

Patent
27 Sep 2018
TL;DR: In this paper, a method of performing gamut mapping on an input image for an image output device is described, which includes determining a protect range corresponding to a first percentage of the color codes of the input image and a compression range corresponding to a second percentage of the color codes, based on the color distribution of the image.
Abstract: A method of performing gamut mapping on an input image for an image output device includes receiving the input image to analyze a color distribution of the input image; determining a protect range corresponding to a first percentage of color codes of the input image and a compression range corresponding to a second percentage of the color codes of the input image based on the color distribution of the input image; and moving at least one of the color codes of the input image outside the protect range of the color codes to the compression range by a compression algorithm to perform gamut mapping on the input image.
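A percentile-based protect/compress split of the kind described can be sketched on one channel of code values; the parameter names and the linear compression curve are illustrative assumptions, not the patent's exact algorithm:

```python
import numpy as np

def gamut_compress(codes, protect_pct=80.0, out_max=1.0):
    """Protect the lower `protect_pct` percent of code values and linearly
    compress the rest into the device's remaining headroom [t, out_max]."""
    codes = np.asarray(codes, float)
    t = np.percentile(codes, protect_pct)   # protect/compress threshold
    hi = codes.max()
    out = codes.copy()
    mask = codes > t
    if hi > t and out_max > t:
        # linearly squeeze [t, hi] into [t, out_max]
        out[mask] = t + (codes[mask] - t) * (out_max - t) / (hi - t)
    else:
        out[mask] = min(out_max, t)         # degenerate case: hard clip
    return out
```

Protected codes pass through unchanged, so in-gamut colours are preserved while out-of-gamut codes land inside the device range without clipping detail.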

Proceedings ArticleDOI
01 Oct 2018
TL;DR: In this research, a visibility improvement method for pedestrian recognition in nighttime vehicle camera images is proposed: nighttime images and corresponding daytime images prepared by simulation are input to a deep neural network, which learns the mapping between them.
Abstract: In this research, we propose a visibility improvement method for pedestrian recognition in nighttime vehicle camera images. In our method, we input nighttime images and the corresponding daytime images, prepared by simulation, to a deep neural network and learn the mapping between them. Using the trained model, we performed image conversion experiments at ranges of 5 m to 30 m from the vehicle to the pedestrian, converting night images to bring them closer to the corresponding daytime images.

Patent
09 Mar 2018
TL;DR: In this paper, a device for tracking an object based on a thermal image is presented, comprising: an image obtaining unit for obtaining a thermal image of an object; an image conversion unit for converting the thermal image into a binarized image; a distance-transformed image generation unit for generating a distance-transformed image by applying a distance transformation to the binarized image; a marker generation unit for assigning a marker to each object based on the distance-transformed image; and an image segmentation unit for performing watershed segmentation for each object around its assigned marker.
Abstract: The present invention relates to a device for tracking an object based on a thermal image, comprising: an image obtaining unit for obtaining a thermal image of an object; a binarized image conversion unit for converting the thermal image into a binarized image; a distance-transformed image generation unit for generating a distance-transformed image by applying a distance transformation to the binarized image; a marker generation unit for assigning a marker to each object based on the distance-transformed image; an image segmentation unit for performing watershed segmentation for each object around its assigned marker; and an object recognition unit for recognizing the object based on the result of the watershed segmentation. The marker generation unit can update the location of the marker corresponding to each object included in the thermal image, and can update the location of each object by applying the watershed segmentation method around the updated marker.
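The binarization and distance-transformation stages can be sketched in plain numpy; the marker-generation and watershed stages are omitted, and a city-block (L1) distance transform stands in for whatever metric the patent uses, with its maxima serving as marker seeds:

```python
import numpy as np

def binarize(thermal, thresh):
    # foreground = pixels at or above the temperature threshold
    return (thermal >= thresh).astype(np.uint8)

def distance_transform(mask):
    """Two-pass city-block distance to the nearest background pixel."""
    h, w = mask.shape
    big = h + w                                   # upper bound on any distance
    d = np.where(mask > 0, big, 0).astype(float)
    for i in range(h):                            # forward raster pass
        for j in range(w):
            if d[i, j]:
                if i: d[i, j] = min(d[i, j], d[i - 1, j] + 1)
                if j: d[i, j] = min(d[i, j], d[i, j - 1] + 1)
    for i in range(h - 1, -1, -1):                # backward raster pass
        for j in range(w - 1, -1, -1):
            if i < h - 1: d[i, j] = min(d[i, j], d[i + 1, j] + 1)
            if j < w - 1: d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    return d
```

The interior maxima of the distance map mark object centres, which is where the patent's marker generation unit would place its watershed seeds.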

Patent
17 May 2018
TL;DR: In this paper, the authors proposed a stereoscopic image generation system based on a body image of a subject imaged without using a position sensor and which can easily identify the subject and determine the size of the subject.
Abstract: The purpose of the present invention is to provide a stereoscopic image generation system which generates a stereoscopic image from a body image of a subject captured without using a position sensor, and which can easily identify the subject and determine the subject's size. The stereoscopic image generation system includes: a reception unit that receives image data of an image including a subject captured by an imaging device; an image processing unit that displays an image based on the received image data on the display screen of a display device; an image conversion unit that converts the image data into stereoscopic display data for displaying, on the display screen, a stereoscopic image in which apparent depth can be identified, on the basis of the image data received by the reception unit; and a determination unit which performs at least one of identifying the subject and determining the subject's size on the basis of the stereoscopic display data. The image conversion unit converts the image data such that the display state of the subject on the display screen differs according to the determination result from the determination unit.

Patent
21 Dec 2018
TL;DR: In this article, a method for determining the homography matrix is proposed, comprising the following steps: dividing the fisheye image and the reference image into the same regions, and making each sub-region in the divided fisheye image the same size as the corresponding sub-region in the reference image.
Abstract: The invention relates to a method and a system for determining a homography matrix, and to an image conversion method and system based on them. The method for determining the homography matrix comprises the following steps: dividing the fisheye image and the reference image into the same regions, such that each sub-region in the divided fisheye image is the same size as the corresponding sub-region in the reference image, and the fisheye image and the reference image themselves are the same size; selecting a sub-region in the fisheye image as an initial sub-region and calculating a homography matrix for the initial sub-region from the corresponding sub-region in the reference image; and calculating the homography matrices of the other sub-regions outward from the center of the initial sub-region. The embodiment of the invention divides the image into regions, with each sub-region corresponding to its own homography matrix, which achieves high image registration accuracy and avoids the prior-art problem in which the whole fisheye image uses only one homography matrix, resulting in low registration accuracy for the fisheye camera.
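The per-sub-region approach comes down to estimating one homography per tile from point correspondences inside that tile. As an illustrative sketch (the patent does not specify its estimation procedure), a homography can be solved from four or more correspondences with the standard Direct Linear Transform:

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct Linear Transform: solve the 3x3 H mapping src -> dst
    from 4+ point correspondences (up to scale, via the SVD nullspace)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def apply_homography(H, pts):
    """Map Nx2 points through H using homogeneous coordinates."""
    pts_h = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

In the scheme above, this estimation would simply be repeated once per sub-region pair, yielding a grid of local homographies instead of a single global one.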

Patent
04 Sep 2018
TL;DR: In this article, an image conversion method and device, a terminal, and a storage medium are described. The method comprises the following steps: acquiring input data of a standard dynamic range rendering image and performing gamma correction; converting the gamma-corrected input data into linear brightness data corresponding to a high dynamic range rendering image format; and obtaining display brightness data of the high dynamic range rendering image from the linear brightness data, thereby displaying the image after the format conversion.
Abstract: The embodiment of the invention discloses an image conversion method and device, a terminal, and a storage medium. The method comprises the following steps: acquiring input data of a standard dynamic range rendering image and performing gamma correction; converting the gamma-corrected input data into linear brightness data corresponding to a high dynamic range rendering image format; and obtaining display brightness data of the high dynamic range rendering image from the linear brightness data, thereby displaying the image after the format conversion. The image conversion method disclosed by the embodiment of the invention stabilizes the display contrast during image conversion, reduces computing complexity, and improves the screen display effect.
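A minimal sketch of the gamma-expansion and luminance-mapping steps might look as follows; the gamma exponent, the SDR white level, and the peak-luminance clip are illustrative assumptions, since the abstract does not specify them:

```python
import numpy as np

def sdr_to_linear(code, gamma=2.4):
    """Undo the SDR transfer curve: 8-bit code values -> linear light in [0, 1]."""
    return (np.asarray(code, dtype=float) / 255.0) ** gamma

def linear_to_display_nits(linear, sdr_white=100.0, hdr_peak=1000.0):
    """Map linear SDR light onto an HDR display's luminance range (in nits).
    Diffuse white (linear == 1.0) is pinned to `sdr_white` nits, and the
    signal is clipped so nothing exceeds `hdr_peak`."""
    return np.minimum(np.asarray(linear, dtype=float) * sdr_white, hdr_peak)
```

Real SDR-to-HDR conversion would additionally encode the result with an HDR transfer function such as PQ or HLG; this sketch stops at absolute display luminance.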

Patent
18 May 2018
TL;DR: In this article, a multiple sequence alignment visualization method based on image processing is proposed, which includes the following steps: S1, taking multiple amino acid sequences generated by a multiple sequence alignment algorithm as input; S2, defining a different color for each type of amino acid and performing color conversion on the amino acid sequences; S3, combining this with image conversion so that each amino acid in the sequences corresponds to one pixel in the image, the color of each pixel corresponds to that of its amino acid, and the multiple one-dimensional amino acid sequences are converted into two-dimensional colored images.
Abstract: The invention relates to a multiple sequence alignment visualization method based on image processing. The method includes following steps: S1, taking multiple amino acid sequences generated by a multiple sequence alignment algorithm as input; S2, respectively defining different colors for different types of amino acids, and performing color conversion on the amino acid sequences; S3, combining with image conversion to enable each amino acid in the amino acid sequences to correspond to one pixel in images, to enable color of each pixel to correspond to that of the corresponding amino acid and to convert multiple one-dimensional amino acid sequences into two-dimensional colored images; S4, utilizing an image segmentation method based on edge detection to segment converted images, and presenting segmented images to a user.
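Steps S2-S3 amount to a lookup from residue letters to RGB triples, one pixel per residue. The color scheme below is purely illustrative (real alignment viewers use standard schemes such as Clustal or Zappo):

```python
import numpy as np

# Hypothetical color scheme: residue classes -> RGB. Illustrative values only.
RESIDUE_COLORS = {
    "A": (200, 200, 200), "G": (200, 200, 200),  # small
    "D": (230, 60, 60),   "E": (230, 60, 60),    # acidic
    "K": (60, 60, 230),   "R": (60, 60, 230),    # basic
    "-": (255, 255, 255),                        # alignment gap
}
DEFAULT_COLOR = (120, 180, 120)  # everything else

def alignment_to_image(sequences):
    """Convert aligned sequences (equal length) into a 2D color image:
    one sequence per row, one residue per pixel."""
    h, w = len(sequences), len(sequences[0])
    img = np.zeros((h, w, 3), dtype=np.uint8)
    for i, seq in enumerate(sequences):
        for j, aa in enumerate(seq):
            img[i, j] = RESIDUE_COLORS.get(aa, DEFAULT_COLOR)
    return img
```

Step S4 would then run an edge-detection-based segmentation over the resulting image to delineate conserved blocks.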

Patent
22 Mar 2018
TL;DR: In this paper, a 3D content providing system is presented, which includes an imaging unit configured to acquire a two-dimensional (2D) image, an image conversion unit that extracts a rectangular region surrounding the 2D image acquired by the imaging unit and performs image warping on the extracted region to generate a 3D image, and a display unit configured to output the 3D image.
Abstract: A three-dimensional (3D) content providing system, a 3D content providing method, and a non-transitory computer-readable recording medium are provided. The 3D content providing system includes an imaging unit configured to acquire a two-dimensional (2D) image, an image conversion unit configured to extract a rectangular region that surrounds the 2D image acquired by the imaging unit, and to perform image warping with respect to the extracted rectangular region to thereby generate a 3D image corresponding to the 2D image, and a display unit configured to output the 3D image.
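The image-warping step is left unspecified, but the rectangular-region extraction can be sketched as finding the bounding box of a foreground mask. The assumption that the subject is available as a binary mask is mine, not the patent's:

```python
import numpy as np

def bounding_rectangle(mask):
    """Smallest axis-aligned rectangle enclosing the nonzero (subject) pixels.
    Returns (top, bottom, left, right), inclusive."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    return top, bottom, left, right

def crop_subject(image, mask):
    """Extract the rectangular region of `image` that surrounds the subject."""
    t, b, l, r = bounding_rectangle(mask)
    return image[t:b + 1, l:r + 1]
```

The warping stage would then transform this cropped rectangle (e.g., with a perspective warp) to synthesize the 3D view.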

Patent
15 Jun 2018
TL;DR: In this paper, video image quality diagnosis, analysis, and detection equipment and an application system consisting of a video acquisition module, an image extraction module, an image analysis module, and a warning module are presented.
Abstract: The invention discloses video image quality diagnosis, analysis, and detection equipment and an application system, comprising a video acquisition module, an image extraction module, an image analysis module, and a warning module. The video acquisition module collects video data; the image extraction module extracts frames from the collected video data to obtain single-frame video images; the image analysis module comprises an image conversion unit, which performs edge extraction on each single-frame video image to obtain an edge image, and an image analysis unit, which analyzes the edge image; and the warning module outputs the image analysis result and warns users when an image is unqualified. The system can quickly analyze video images, judge whether the image quality is qualified, and raise an alarm when the image quality is poor.
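The edge-extraction-based quality check can be approximated with a gradient-magnitude score: a frame whose edge energy falls below a threshold is flagged. The threshold value and the use of `numpy.gradient` (rather than, say, a Sobel or Canny operator) are illustrative assumptions:

```python
import numpy as np

def edge_quality_score(frame):
    """Mean gradient magnitude of a grayscale frame; low values suggest blur,
    signal loss, or an otherwise degraded feed."""
    gy, gx = np.gradient(np.asarray(frame, dtype=float))
    return float(np.mean(np.hypot(gx, gy)))

def check_frame(frame, threshold=5.0):
    """Return an OK/ALERT status for one frame plus the underlying score."""
    score = edge_quality_score(frame)
    status = "OK" if score >= threshold else "ALERT: low edge energy"
    return status, score
```

In the described system, this check would run on each extracted frame, with the warning module raising an alarm whenever the status is not "OK".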

Patent
10 Aug 2018
TL;DR: In this paper, a camera image conversion system of a vacuum-packaged image sensor chip is described, where a sealed cavity is formed in the housing, and a vacuum degree monitoring module is disposed in the sealed cavity.
Abstract: The invention discloses a camera image conversion system for a vacuum-packaged image sensor chip. According to the technical scheme, the system comprises a housing, an image conversion module, a power module for supplying power to the image conversion module, and a data processing module for processing signals from the image conversion module, wherein the image conversion module, the power module, and the data processing module are arranged in the housing; a sealed cavity is formed in the housing, and a vacuum degree monitoring module is disposed in the sealed cavity; a terminal control module is disposed on the outer side of the sealed cavity and connected to the vacuum degree monitoring module. The vacuum degree in the sealed cavity can thus be monitored in real time, vacuum degree signals can be sent in real time, and changes in the vacuum degree can be known more accurately and promptly.