
Showing papers on "Image conversion published in 2017"


Journal ArticleDOI
TL;DR: The effectiveness of the proposed grayscale conversion is confirmed by the comparative analysis performed on the color-to-gray benchmark dataset across 10 existing algorithms based on the standard objective measures, namely normalized cross-correlation, color contrast preservation ratio, color content fidelity ratio, E score and subjective evaluation.
Abstract: This paper provides an alternative framework for color-to-grayscale image conversion by exploiting the chrominance information present in the color image using singular value decomposition (SVD). In the proposed technique, a weight matrix corresponding to the chrominance components is derived by reconstructing the chrominance data matrix (planes a* and b*) from the eigenvalues and eigenvectors computed using SVD. The final grayscale image is obtained by adding the weighted chrominance data to the luminance component of the CIE L*a*b* color space of the given color image, which is kept intact. The effectiveness of the proposed grayscale conversion is confirmed by a comparative analysis performed on the color-to-gray benchmark dataset against 10 existing algorithms, based on standard objective measures, namely normalized cross-correlation, color contrast preservation ratio, color content fidelity ratio, E score and subjective evaluation.
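The core idea — reconstructing the chrominance planes from their dominant singular pair and adding a weighted copy to the untouched luminance — can be sketched in pure Python. The helper names and the scalar weight `alpha` are illustrative assumptions; the paper's actual weight matrix derivation may differ.

```python
import math

def top_singular_triplet(A, iters=200):
    """Power iteration on A^T A for the dominant singular triplet (s, u, v)."""
    m, n = len(A), len(A[0])
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(iters):
        # v <- normalised A^T (A v)
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        v = [sum(A[i][j] * w[i] for i in range(m)) for j in range(n)]
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    s = math.sqrt(sum(x * x for x in w))   # top singular value
    u = [x / s for x in w]                 # top left singular vector
    return s, u, v

def decolorize(L, a, b, alpha=0.1):
    """Grayscale = L* plus a small weight times the rank-1 chrominance reconstruction."""
    s, u, v = top_singular_triplet([a, b])      # 2 x N chrominance data matrix
    recon_a = [s * u[0] * vj for vj in v]        # rank-1 reconstruction of a*
    recon_b = [s * u[1] * vj for vj in v]        # rank-1 reconstruction of b*
    return [Li + alpha * (ra + rb) for Li, ra, rb in zip(L, recon_a, recon_b)]
```

For a rank-1 chrominance matrix the reconstruction is exact, which is what the test below checks.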

41 citations


Journal ArticleDOI
25 Oct 2017-Sensors
TL;DR: An automated solution developed to improve the radiometric quality of image datasets and the performance of two main steps of the photogrammetric pipeline is presented, together with the use of a specific weighted polynomial regression.
Abstract: Ensuring color fidelity in image-based 3D modeling of heritage scenarios is still an open research matter. Image colors are important during data processing as they affect algorithm outcomes; therefore their correct treatment, reduction and enhancement is fundamental. In this contribution, we present an automated solution developed to improve the radiometric quality of image datasets and the performance of two main steps of the photogrammetric pipeline (camera orientation and dense image matching). The suggested solution aims to achieve robust automatic color balance and exposure equalization, stability of the RGB-to-gray image conversion and faithful color appearance of a digitized artifact. The innovative aspects of the article are: complete automation, better color target detection, a MATLAB implementation of the ACR scripts created by Fraser and the use of a specific weighted polynomial regression. A series of tests are presented to demonstrate the efficiency of the developed methodology and to evaluate color accuracy (‘color characterization’).
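The "weighted polynomial regression" step can be illustrated with a small pure-Python weighted least-squares fit via the normal equations. The function names and the normal-equation approach are assumptions for illustration, not the authors' implementation.

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def weighted_polyfit(xs, ys, ws, degree=2):
    """Solve (X^T W X) c = X^T W y for polynomial coefficients c0..c_degree."""
    n = degree + 1
    A = [[sum(w * x ** (i + j) for x, w in zip(xs, ws)) for j in range(n)]
         for i in range(n)]
    b = [sum(w * x ** i * y for x, y, w in zip(xs, ys, ws)) for i in range(n)]
    return solve_linear(A, b)
```

With unit weights this reduces to an ordinary polynomial fit; per-sample weights let trusted color-target measurements dominate the regression.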

29 citations


Patent
01 Feb 2017
TL;DR: In this article, a multi-feature-fusion-convolutional-neural-network-based plankton image classification method was proposed, which combines the perspective of biological morphology, computer vision methods, and deep learning technology.
Abstract: The invention provides a multi-feature-fusion-convolutional-neural-network-based plankton image classification method. The method comprises: a large number of clear plankton images is collected and a large-scale multi-type plankton image data set is constructed; a global feature and a local feature are extracted by using image conversion and edge extraction algorithms; the original feature image, the global feature image, and the local feature image are inputted into a deep-learning multi-feature-fusion convolutional neural network for training, thereby obtaining a multi-feature-fusion convolutional neural network model; and then plankton images are inputted into the multi-feature-fusion convolutional neural network model and classification is carried out based on the finally outputted probability score. According to the invention, the perspective of biological morphology, computer vision methods, and deep learning technology are combined; thus the classification accuracy for plankton images, especially large-scale multi-type plankton images, is high.

21 citations


Patent
29 Sep 2017
TL;DR: In this article, an unpaired image conversion method using a cycle consistent adversarial network (CycleGAN) is proposed, which consists of a general module, a loss function module, an objective function module and a training network module.
Abstract: The invention provides an unpaired image conversion method using a cycle-consistent adversarial network. The main content of the method comprises a general module, a loss function module, an objective function module and a training network module. The process is as follows: first, the discriminator is modeled using a generative adversarial network, and a mapping function is designed for the original set domain X so that the generated image has the image characteristics of the target set domain Y; second, loss function modeling is performed on the conversion process, and by minimizing the loss function the classifier is made increasingly unable to discriminate the generated image, so that the success rate of unpaired image conversion is enhanced. Different styles of photos or images can be processed, the least squares method and maximum likelihood can be used to minimize the loss function, and the realism of image conversion can also be enhanced.
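The two losses the abstract alludes to — cycle consistency and a least-squares adversarial term — can be sketched as plain functions in the CycleGAN style. The mappings `g` and `f` stand in for the two generators; vectors stand in for images.

```python
def cycle_consistency_loss(batch, g, f):
    """Mean L1 distance between x and F(G(x)) over a batch of flat vectors."""
    total = 0.0
    for x in batch:
        x_rec = f(g(x))  # x -> G(x) -> F(G(x)) should come back to x
        total += sum(abs(a - b) for a, b in zip(x, x_rec)) / len(x)
    return total / len(batch)

def lsgan_generator_loss(disc_scores):
    """Least-squares GAN generator loss: push D(G(x)) toward the 'real' label 1."""
    return sum((s - 1.0) ** 2 for s in disc_scores) / len(disc_scores)
```

If `f` exactly inverts `g`, the cycle loss is zero; any residual measures how much information the translation loses.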

21 citations


Patent
01 Sep 2017
TL;DR: In this paper, unsupervised images are used to learn a bidirectional conversion function between two image domains in an image conversion network framework (UNIT) comprising variational autoencoders (VAEs) and generative adversarial networks; through the interaction of an adversarial training objective and a weight-sharing constraint, corresponding images are generated in the two domains, each converted image is associated with an input image of its domain, and the image reconstruction and image conversion problems are solved by jointly training the networks.
Abstract: The invention provides an image conversion method based on a variational autoencoder and a generative adversarial network. The method mainly comprises the variational autoencoder (VAE), weight sharing, generative adversarial network (GAN) construction, and learning. In the process, unsupervised images are used to learn a bidirectional conversion function between two image domains in an image conversion network framework (UNIT); each image domain is modeled using a VAE; through the interaction of an adversarial training objective and a weight-sharing constraint, corresponding images are generated in the two image domains; each converted image is associated with an input image of its domain; and the image reconstruction and image conversion problems can be solved by jointly training the networks. The method is advantaged in that unsupervised images are used in the image conversion framework, images in two domains having no relation to each other can be converted, a corresponding training data set formed by paired images is not needed, efficiency and practicality are improved, and the method can be extended to unsupervised language conversion.

21 citations


Patent
09 May 2017
TL;DR: In this article, a calibration device for an optical device including a two-dimensional image conversion element having a plurality of pixels and an optical system that forms an image-formation relationship between the image conversion elements and the three-dimensional world coordinate space is presented.
Abstract: A calibration device for an optical device including a two-dimensional image conversion element having a plurality of pixels and an optical system that forms an image-formation relationship between the image conversion element and the three-dimensional world coordinate space. The calibration device includes: a calibration-data acquisition unit that acquires calibration data representing the correspondence between two-dimensional pixel coordinates in the image conversion element and three-dimensional world coordinates in the world coordinate space; and a parameter calculating unit that calculates parameters of a camera model by applying, to the calibration data acquired by the calibration-data acquisition unit, a camera model in which two coordinate values of the three-dimensional world coordinates are expressed as functions of the other coordinate value of the world coordinates and the two coordinate values of the two-dimensional pixel coordinates.

18 citations


Patent
Fu Ying, Wu Xi, Xing Xiaoyang, Li Yulian, Zhou Jiliu 
21 Nov 2017
TL;DR: In this paper, a feature-loss-based medical image super-resolution reconstruction method was proposed, which involves an image conversion network fw in which the neurons of a feed-forward fully connected neural network are divided into different groups according to an information-receiving sequence.
Abstract: The invention discloses a feature-loss-based medical image super-resolution reconstruction method. The method involves an image conversion network fw. The image conversion network fw is a feed-forward fully connected neural network; its neurons are divided into different groups according to an information-receiving sequence, and each group is regarded as a network layer. The neurons in each layer receive the numerical outputs of the neurons in the previous layer as their inputs, and their own outputs are passed to the next layer, so information in the whole network is transmitted in one direction. The image conversion network fw receives a low-resolution medical image of size H/4 × W/4 sent by the feed-forward neural network and converts it into a high-resolution image of size H × W.

16 citations


Proceedings ArticleDOI
01 Sep 2017
TL;DR: The paper presents a new method of vehicle speed estimation using image data processing that employs conversion of greyscale input images into binary form based on small gradients in the input images.
Abstract: The paper presents a new method of vehicle speed estimation using image data processing. The presented method employs conversion of greyscale input images into binary form. Image conversion into binary form is based on small gradients in the input images. Contents of the obtained binary images correspond with traffic scenes presented in the input images. Vehicle speed is estimated on the basis of differences between appropriate ordinal image numbers in the sequence of input images. These ordinal image numbers are determined according to the changes of the state of the initial detection field and final detection field. The state of the detection fields is determined by analysis of their features. Changes of the features of the detection fields are caused by passing vehicles. Experimental results of vehicle speed estimation are provided.
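The gradient-based binarization step can be sketched as follows. The threshold value, the forward-difference gradient, and the border handling are illustrative assumptions; the paper does not give these details here.

```python
def binarize_by_gradient(img, thresh=8):
    """Set a pixel to 1 where the local intensity gradient is small, 0 otherwise.
    The last row and column stay 0 because forward differences need a neighbour."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]   # horizontal forward difference
            gy = img[y + 1][x] - img[y][x]   # vertical forward difference
            out[y][x] = 1 if abs(gx) + abs(gy) <= thresh else 0
    return out
```

Flat road surface maps to 1 while object boundaries (large gradients) map to 0, so the binary content follows the traffic scene, as the abstract describes.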

11 citations


Patent
09 Jun 2017
TL;DR: In this paper, the authors proposed a method to improve the image splicing quality by converting a left image overlapping area into a right image conversion image according to the first optical flow field.
Abstract: The invention relates to the image processing technical field, especially to an image splicing method and device, and aims to improve the image splicing quality, thus effectively preventing ghosting, double-image and deformation problems in the spliced image. The method comprises the following steps: respectively determining a first optical flow field and a second optical flow field; converting a left image overlapping area into a left image conversion image according to the first optical flow field; and converting a right image overlapping area into a right image conversion image according to the second optical flow field, thus making the obtained left and right image conversion images similar or identical, preventing ghosting, double-image and deformation problems when image fusion is carried out according to the left and right image conversion images, and improving the image splicing quality.

8 citations


Proceedings ArticleDOI
01 Sep 2017
TL;DR: The present work analyzes the significant and discriminative contrast and structure information preserved in grayscale images converted with two different decolorization techniques, rgb2gray and singular value decomposition based color-to-grayscale image conversion (SVD), applied in color image classification systems using three different proposed features.
Abstract: In general, the three main modules of color image classification systems are: color-to-grayscale image conversion, feature extraction and classification. The color-to-grayscale image conversion is an important pre-processing step, which must preserve in the converted grayscale images the significant and discriminative contrast and structure information of the original color image. All the existing techniques for color-to-grayscale image conversion preserve the significant contrast and structure information in the converted grayscale images in different manners. Hence, the present work analyzes the significant and discriminative contrast and structure information preserved in the grayscale images converted using two different decolorization techniques, called rgb2gray and singular value decomposition based color-to-grayscale image conversion (SVD), applied in color image classification systems using the three different proposed features. The three different features for color image classification systems are proposed based on the combination of the existing dense SIFT features and the contrast and structure content computed using the color-to-gray structure similarity index (C2G-SSIM) metric.

7 citations


Patent
20 Oct 2017
TL;DR: In this article, a detection method based on a test picture of a liquid crystal screen is presented, which includes a storage module, an image obtaining module, image conversion module, a checking module and a detection result module.
Abstract: The invention discloses a detection method based on a test picture of a liquid crystal screen. The detection method includes the following steps: (1) the screen picture in the test mode of the liquid crystal screen is obtained and converted into a grayscale image; (2) the grayscale values of all pixel points in the grayscale image are obtained and compared with a preset grayscale value standard template; if the values are equal, the detection is qualified, and otherwise the detection is unqualified. The liquid crystal screen is detected automatically, with high detection efficiency and high detection accuracy. The invention further discloses a detection system based on the test picture of the liquid crystal screen. The detection system comprises a storage module, an image obtaining module, an image conversion module, a checking module and a detection result module. The detection efficiency of the system is high.
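Step 2 of the method — pixel-by-pixel comparison against a standard template — reduces to a simple check. This is a sketch; the patent specifies exact equality, so no tolerance handling is assumed.

```python
def detect_screen(gray_image, template):
    """Compare every pixel's grayscale value with the preset standard template."""
    for img_row, tpl_row in zip(gray_image, template):
        for p, t in zip(img_row, tpl_row):
            if p != t:
                return "unqualified"
    return "qualified"
```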

Patent
20 Oct 2017
TL;DR: In this paper, an image conversion network processing method is executed based on a trained first network and comprises the following steps: obtaining a first image uploaded by the terminal; inputting the first image into the first network to obtain a second network corresponding to the style of the first images; transmitting the second network to the terminal, so that the terminal performs style processing on a to-be-processed second image by using the second networks.
Abstract: The invention discloses an image conversion network processing method, a server, a terminal, a computing device and a storage medium. The image conversion network processing method is executed based on a trained first network and comprises the following steps: obtaining a first image uploaded by the terminal; inputting the first image into the first network to obtain a second network corresponding to the style of the first image; transmitting the second network to the terminal, so that the terminal performs style processing on a to-be-processed second image by using the second network. By adoption of the technical scheme, the corresponding image conversion network can be obtained quickly by the trained first network, thereby improving the image conversion network obtaining efficiency and optimizing the image conversion network processing mode.

Patent
21 Aug 2017
TL;DR: In this paper, the authors present a method and a device to recognize an iris through pupil detection, capable of improving the accuracy of iris image extraction.
Abstract: The present invention provides a method and a device to recognize an iris through pupil detection, capable of improving the accuracy of extracting an iris image. According to an embodiment of the present invention, the device to recognize the iris comprises: an image conversion unit converting an original image of a photographed subject, obtained by an image capturing part, into a grayscale image; an image binarization unit performing binarization on the image converted into grayscale; a pixel grouping unit classifying interconnected regions having the same pixel value in the binary image into the same group; a pupil detection unit determining, among the regions grouped by the pixel grouping unit, a region whose physical value lies within a reference range as the pupil region; and an iris image extraction unit specifying an iris region on the original image based on the location of the pupil region.
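The pixel grouping and pupil detection units can be sketched as a 4-connected component search followed by an area filter. The function names and the use of area as the "physical value" are illustrative assumptions; the patent's criterion could equally be diameter or circularity.

```python
from collections import deque

def connected_components(binary):
    """Group 4-connected foreground pixels; return a list of pixel groups."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    groups = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                q, group = deque([(y, x)]), []
                seen[y][x] = True
                while q:  # breadth-first flood fill of one region
                    cy, cx = q.popleft()
                    group.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                groups.append(group)
    return groups

def pick_pupil(groups, min_area, max_area):
    """Keep the first group whose area lies within the reference range."""
    for g in groups:
        if min_area <= len(g) <= max_area:
            return g
    return None
```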

Patent
24 Nov 2017
TL;DR: In this article, an image stylization method is executed based on a trained first network, which includes the following steps that: a first image is acquired; the first image was inputted to the first network; and the second network was utilized to perform stylization processing on a second image to be processed, so that a third image corresponding to the second image was obtained.
Abstract: The invention discloses an image stylization processing method, an image stylization processing device, a computing apparatus and a computer storage medium The image stylization processing method is executed based on a trained first network The method includes the following steps that: a first image is acquired; the first image is inputted to the first network, so that a second network corresponding to the style of the first image is obtained; and the second network is utilized to perform stylization processing on a second image to be processed, so that a third image corresponding to the second image is obtained According to the image stylization processing method provided by the technical schemes of the present invention, the corresponding image conversion network can be quickly obtained by using the trained first network, and therefore, the efficiency of image stylization processing is improved, and an image stylization processing mode is optimized

Patent
29 Sep 2017
TL;DR: In this article, a method for converting pixel data of a tooth CT image into 3D printing data was proposed; the method comprises the steps of acquiring a CT image of the tooth, extracting pixel data from the image so as to generate a pixel contour point cloud, acquiring an outer surface point cloud of the tooth, performing coarse registration and fine registration on the pixel contour point cloud and the outer surface point cloud, respectively forming fine registration point cloud databases, setting a judgment value, verifying the fine registration result, combining the fine registration point cloud databases, acquiring a complete point cloud database of the tooth, and rebuilding a three-dimensional grid database.
Abstract: The invention relates to a method for converting pixel data of a tooth CT image into 3D printing data, and belongs to the technical field of image processing. The method comprises the steps of acquiring a CT image of a tooth, extracting pixel data of the image so as to generate a pixel contour point cloud, acquiring an outer surface point cloud of the tooth, performing coarse registration and fine registration on the pixel contour point cloud and the outer surface point cloud, respectively forming fine registration point cloud databases, setting a judgment value, verifying the fine registration result, combining the fine registration point cloud databases, acquiring a complete point cloud database of the tooth, rebuilding a three-dimensional grid database based on the complete point cloud database of the tooth, and forming an STL format supported by 3D printing equipment. According to the invention, a slice contour high-precision extraction method is adopted to realize adaptive extraction in order to adapt to the non-free curved surface of the tooth, CT scanning and non-contact scanning are combined innovatively, and existing mature algorithms are integrated, thereby realizing conversion of the pixel data of the tooth CT image into 3D printing data, effectively improving the image conversion efficiency and precision, and being highly targeted.

Journal ArticleDOI
TL;DR: An improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed using the color- to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images.
Abstract: A color image contains luminance and chrominance components representing the intensity and color information respectively. The objective of the work presented in this paper is to show the significance of incorporating the chrominance information for the task of scene classification. An improved color-to-grayscale image conversion algorithm that effectively incorporates the chrominance information is proposed using the color-to-gray structure similarity index (C2G-SSIM) and singular value decomposition (SVD) to improve the perceptual quality of the converted grayscale images. The experimental result analysis, based on the image quality assessment for image decolorization called C2G-SSIM and the success rate (on the Cadik and COLOR250 datasets), shows that the proposed image decolorization technique performs better than 8 existing benchmark algorithms for image decolorization. In the second part of the paper, the effectiveness of incorporating the chrominance component in the scene classification task is demonstrated using a deep belief network (DBN) based image classification system developed using dense scale invariant feature transform (SIFT) features. The level of chrominance information incorporated by the proposed image decolorization technique is confirmed by the improvement in the overall scene classification accuracy. Also, the overall scene classification performance is improved by the combination of models obtained using the proposed and the conventional decolorization methods.

Patent
15 Mar 2017
TL;DR: In this article, a bottom-up visual selective attention model was adopted to obtain image intensity, color and direction feature patterns from an input image through Gaussian pyramids and a "center-surround" operator.
Abstract: The invention relates to a mixed digital image halftoning method based on a significance visual attention model, and belongs to the technical field of image processing. The method is characterized by adopting a bottom-up visual selective attention model, and carrying out calculation to obtain image intensity, color and direction feature patterns from an input image through Gaussian pyramids and a "center-surround" operator; carrying out normalization on the feature patterns to enable the feature patterns to be superposed into a total significance pattern, and extracting regions of interest (ROIs) of the image; carrying out image halftoning on the images in the ROIs through a model-based weighted least square halftone iteration method; carrying out halftone image conversion in non regions-of-interest (NROI) through a tone-based error diffusion method, and carrying out halftone parallel computing in the two regions of the image; and objectively evaluating digital image halftoning performance through a quality evaluation method based on the selective attention model, and analyzing algorithm complexity to obtain an optimum halftone image.
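The error-diffusion stage used for the non-ROI regions can be illustrated with the classic Floyd-Steinberg kernel. This is a stand-in: the patent describes a tone-based variant whose exact weights are not given here.

```python
def error_diffusion(img):
    """Floyd-Steinberg error diffusion: threshold each pixel to 0/255 and
    push the quantization error onto unprocessed neighbours (7/16, 3/16, 5/16, 1/16)."""
    h, w = len(img), len(img[0])
    work = [[float(p) for p in row] for row in img]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = work[y][x]
            new = 255 if old >= 128 else 0
            out[y][x] = new
            err = old - new
            if x + 1 < w:
                work[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1][x - 1] += err * 3 / 16
                work[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1][x + 1] += err * 1 / 16
    return out
```

Because the error is fully redistributed (except at image borders), the halftone preserves average tone while quantizing every pixel to black or white.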

Patent
20 Jun 2017
TL;DR: Wang et al. as mentioned in this paper proposed a video stabilizing method based on online total variation optimization and linear smoothing methods, which is computationally fast, achieves real-time processing, generates a stable video with high fidelity, does not lose much image information, and has good robustness.
Abstract: The invention relates to a video stabilizing method based on online total variation optimization, and belongs to the technical field of video processing. The method comprises the following steps: using feature point detection and matching to perform motion estimation on a jittery video, calculating the interframe conversion matrix, and obtaining the camera path of the original jittery video; optimizing the camera path with the online total variation optimization and linear smoothing methods so as to obtain a stable camera path; and, by the conversion relation between the original camera path and the stable camera path, subjecting the jittery video frames to image conversion to generate a stable video. Compared with existing methods, the method uses interframe homographic conversion to describe the camera movement, and reduces the video difference by the online total variation optimization method and the linear smoothing method so as to smooth the camera path. The optimization method is computationally fast, achieves real-time processing, generates a stable video with high fidelity, does not lose much image information, and has good robustness.
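The linear-smoothing half of the path optimization can be sketched for a 1-D camera path (total-variation optimization is more involved and omitted). The window radius is an assumption for illustration.

```python
def smooth_path(path, radius=2):
    """Sliding-window linear smoothing of a 1-D camera path.
    The window is clipped at the sequence boundaries."""
    n = len(path)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(path[lo:hi]) / (hi - lo))
    return out
```

The stabilizing warp for each frame is then derived from the difference between the original and smoothed paths.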

Journal ArticleDOI
TL;DR: The algorithm provides good overall accuracy and average processing time for images captured during day and night, using image segmentation in a vehicle number plate detection system.
Abstract: Objectives: To develop a well-organised system for vehicle number plate detection. Methods: The major techniques used in the implementation are grayscale conversion, black and white image conversion, hole filling, border detection and image segmentation. These help to easily detect the license plate and identify the vehicle details. The method is implemented on real images. Findings: The algorithm provides good overall accuracy and average processing time for images processed during day and night, and the system works well under different illumination conditions. Applications: This system can be used for the Indian road scenario, since there is no proper lane marking system and there is always a chance of many disturbances in the images. This technique is applicable mainly in restricted and highly confidential areas.
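The grayscale and black-and-white conversion steps listed under Methods are standard. A minimal sketch follows, using the BT.601 luma weights common to `rgb2gray` implementations; the threshold of 128 is an assumption, as the paper does not state one.

```python
def rgb_to_gray(r, g, b):
    """Luma-weighted grayscale conversion (ITU-R BT.601 weights)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def to_black_white(gray, thresh=128):
    """Black-and-white image conversion by simple thresholding."""
    return 255 if gray >= thresh else 0
```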

Patent
01 Feb 2017
TL;DR: In this article, a 3D liquid crystal display screen is connected with a server to acquire a to-be-displayed image and an image conversion operation is carried out on each frame of acquired image one by one according to a display sequence to acquire either a left eye image or a right eye image of a current frame of image.
Abstract: The invention provides a holographic projection method and system. The method is realized by a holographic projection device, the holographic projection device comprises: a server and a 3D liquid crystal display screen connected with the server; the 3D liquid crystal display screen comprises two display screens which are vertically connected with one another, one of the two display screens is a vertical display screen that is vertically placed, and the other one is a horizontal display screen that is horizontally placed; the method comprises the following steps: the server acquires a to-be-displayed image; an image conversion operation is carried out on each frame of acquired image one by one according to a display sequence to acquire a left eye image or a right eye image of a current frame of image; once a frame of image is converted, the server divides the image acquired by conversion into an upper side image and a lower side image; and the server transmits the upper side image to the vertical display screen for display and transmits the lower side image to the horizontal display screen for display. By adoption of the holographic projection method and system provided by the invention, the image can be displayed at a relatively low cost, and a three-dimensional effect of holographic projection is generated.

Patent
12 Oct 2017
TL;DR: In this article, the authors propose a system for displaying an image projected from a projector accurately, so as to be superimposed over an actual landscape a driver is viewing, by using a filter design program.
Abstract: PROBLEM TO BE SOLVED: To provide a device for a vehicle, a program for the vehicle, and a filter design program capable of displaying an image projected from a projector accurately, so as to be superposed over an actual landscape a driver is viewing.SOLUTION: A device ECU 2 for vehicle in an embodiment comprises: an image creation part 10a for creating an image in a predetermined reference coordinate system; a view point position identification part 10b for identifying a view point position of a driver; a superimposing position identification part 10c for identifying a display position when an image (G1) is displayed in a view of the driver; an image conversion part 10d for creating an image (G2) which is an image converted from the image (G1) to be adapted into a coordinate system based on the view point position of the driver; and a notification part 10e for displaying the display image (G11) which can be adapted in the converted image using a projector 7, so as to be superimposed in a view field of the driver.SELECTED DRAWING: Figure 1

Patent
10 May 2017
TL;DR: In this paper, a fisheye image conversion method and device is presented, which does not need the steps of correcting the fishee image to a planar graph without distortion and then converting the planar graphs without distortion to the panorama.
Abstract: The embodiment of the invention provides a fisheye image conversion method and device. The method comprises: obtaining a fisheye image to be converted; converting the coordinates of each pixel in the fisheye image to coordinates in a panorama, and establishing a mapping relation between the coordinates of each pixel in the fisheye image and the coordinates in the panorama; and, according to the mapping relation, sampling the pixels corresponding to the panorama coordinates from the fisheye image and converting the fisheye image to the panorama. The fisheye image conversion method and device do not need the steps of correcting the fisheye image to a distortion-free planar image and then converting that planar image to the panorama, which reduces the steps and links in the process and improves the conversion efficiency, enabling real-time conversion of fisheye images obtained in real time.
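The pixel-coordinate mapping between panorama and fisheye can be sketched under a simple equidistant fisheye model (r = f·θ) covering a 180° field of view. The projection model and parameter names are assumptions, since the patent does not specify them.

```python
import math

def build_fisheye_to_panorama_map(pano_w, pano_h, fish_cx, fish_cy, fish_radius):
    """For each panorama pixel (u, v), compute the fisheye source coordinate.
    Panorama rows map to polar angle theta, columns to azimuth phi;
    the equidistant model places theta at radius r = fish_radius * theta / (pi/2)."""
    mapping = {}
    for v in range(pano_h):
        theta = (v + 0.5) / pano_h * (math.pi / 2)       # polar angle, 0..90 deg
        r = fish_radius * theta / (math.pi / 2)           # equidistant projection
        for u in range(pano_w):
            phi = (u + 0.5) / pano_w * 2 * math.pi        # azimuth, 0..360 deg
            x = fish_cx + r * math.cos(phi)
            y = fish_cy + r * math.sin(phi)
            mapping[(u, v)] = (x, y)
    return mapping
```

Precomputing this table once is what makes per-frame conversion a pure lookup-and-sample pass, matching the real-time claim in the abstract.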

Patent
26 Dec 2017
TL;DR: In this paper, an image conversion network acquisition method based on a trained first network is presented, which consists of two steps: a first image and a second image are acquired; and the inputting images are inputted to the first network respectively, according to a preset fusion weight.
Abstract: The invention discloses an image conversion network acquisition method, an acquisition device, computing equipment and a computer storage medium. The image conversion network acquisition method is executed based on a trained first network. The method comprises steps: a first image and a second image are acquired; and the first image and the second image are inputted to the first network respectively, according to a preset fusion weight, weighting operation is carried out on a weighting operation layer of the first network, and a second network corresponding to the style after the first image and the second image are fused is obtained. According to the technical scheme provided by the invention, by using the trained first network, the image conversion network corresponding to the style after the two style images are fused can be obtained quickly, the image conversion network efficiency can be effectively improved, and the image conversion network processing mode is optimized.

Journal ArticleDOI
TL;DR: In this article, an improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed, using the color-to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images.
Abstract: Color images contain luminance and chrominance components representing the intensity and color information, respectively. The objective of this paper is to show the significance of incorporating chrominance information to the task of scene classification. An improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed using the color-to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images. The experimental results based on an image quality assessment for image decolorization and its success rate (using the Cadik and COLOR250 datasets) show that the proposed image decolorization technique performs better than eight existing benchmark algorithms for image decolorization. In the second part of the paper, the effectiveness of incorporating the chrominance component for scene classification tasks is demonstrated using a deep belief network-based image classification system developed using dense scale-invariant feature transforms. The amount of chrominance information incorporated into the proposed image decolorization technique is confirmed with the improvement to the overall scene classification accuracy. Moreover, the overall scene classification performance improved by combining the models obtained using the proposed method and conventional decolorization methods.
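The SVD step used by this line of work can be sketched roughly as follows, assuming the image is already split into CIEL*a*b* planes; the rank-`k` truncation and the 0.2 blend factor are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def svd_weighted_decolorize(L, a, b, k=1):
    """Rough sketch: reconstruct the chrominance planes (a*, b*) from
    their top-k singular components, turn the reconstructed chrominance
    magnitude into a weight map, and add it to the untouched luminance
    L* (assumed to lie in [0, 100])."""
    def rank_k(M, k):
        # Truncated SVD reconstruction of one chrominance plane.
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U[:, :k] * s[:k]) @ Vt[:k]
    chroma = np.hypot(rank_k(a, k), rank_k(b, k))  # chrominance magnitude
    if chroma.max() > 0:
        chroma = chroma / chroma.max()
    # Illustrative blend factor; the paper derives its weights differently.
    gray = L + 0.2 * chroma * 100.0
    return np.clip(gray, 0.0, 100.0)
```

The point of the truncation is that the leading singular components capture the dominant chrominance structure, so colors that differ only in chrominance still map to distinguishable gray levels.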

Proceedings ArticleDOI
01 May 2017
TL;DR: Wang et al. as discussed by the authors proposed a new image processing and BP neural network detection and recognition method based on magnetic flux leakage data, in order to realize accurate and effective identification of defects on the surface of steel plate.
Abstract: CCD recognition accuracy for defects on the steel plate surface suffers from non-uniform illumination, the color of the light source, the site environment, and the low signal-to-noise ratio of the images collected by the system. To achieve accurate and effective identification of steel plate surface defects, this paper proposes a new image processing and BP neural network detection and recognition method based on magnetic flux leakage data. Non-destructive magnetic flux leakage detection replaces the CCD detection method for data collection; image conversion technology then converts the data into images. After that, image processing is used to detect the defects and extract their features. Finally, a BP neural network defect identification model is constructed to identify the defects. The simulation results show that the trained model has a strong ability to recognize the length, width and depth of defects. The new method can effectively detect and identify low-contrast, small defects in weak signals, which further improves the detection resolution and sensitivity.
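The data-to-image conversion and feature extraction steps can be sketched as below, assuming the MFL data arrive as a channels × samples matrix; the threshold and the bounding-box features are illustrative, not the paper's exact pipeline:

```python
import numpy as np

def mfl_to_image(mfl):
    """Normalize a magnetic-flux-leakage data matrix (channels x samples)
    into an 8-bit grayscale image -- the data-to-image conversion step."""
    lo, hi = float(mfl.min()), float(mfl.max())
    img = np.rint((mfl - lo) / max(hi - lo, 1e-12) * 255.0)
    return img.astype(np.uint8)

def defect_features(img, thresh=128):
    """Illustrative feature vector for the BP-network classifier: defect
    length and width from the bounding box of above-threshold pixels,
    plus the peak amplitude as a proxy for depth."""
    ys, xs = np.nonzero(img > thresh)
    if xs.size == 0:
        return np.zeros(3)
    length = xs.max() - xs.min() + 1   # along the scanning direction
    width = ys.max() - ys.min() + 1    # across sensor channels
    return np.array([length, width, img.max()], dtype=float)
```

A feature vector of this kind (length, width, amplitude) is what the BP network would be trained on to regress or classify defect size.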

Patent
13 Jun 2017
TL;DR: In this article, a teaching display system for a smart three-dimensional mathematics scale model is presented, including a CCD industrial camera and a CCD image sensor that converts the optical image signal into an electrical signal.
Abstract: The utility model relates to a teaching display system for a smart three-dimensional mathematics scale model. The system comprises: a CCD industrial camera for capturing images of the three-dimensional scale model; a CCD image sensor, connected with the camera, for converting the optical image signal into an electrical signal; a single-chip microcontroller, connected with the image sensor, for analyzing and processing the received electrical signal; a stepper motor, connected with the microcontroller through a drive controller, for rotating the display platform; a telescopic motor, connected with the microcontroller through a drive controller, for raising and lowering the display platform; an image acquisition and analysis module, connected with the microcontroller, for image analysis; an image processing module, connected with the acquisition and analysis module, for image processing; and a 3D image conversion module, connected with the microcontroller, for image conversion. The smart three-dimensional scale model has a high degree of intelligence and offers a 3D image conversion function.

Patent
12 Dec 2017
TL;DR: In this article, a method and a device for three-dimensional information collection, analysis and processing are presented, in which point information from two-dimensional image sets is mapped to three-dimensional space information, so that a three-dimensional model of an indoor space can be rebuilt quickly.
Abstract: The invention provides a method and a device for three-dimensional information collection, analysis and processing. The method comprises the steps of: S1, scanning a three-dimensional space by starting a three-dimensional space scanning device to collect three-dimensional space image data; S2, performing modeling processing on the collected three-dimensional space image data and encoding and sorting the data according to the scanning time sequence to generate two-dimensional image sets; S3, performing mapping superposition on each two-dimensional image set in a three-dimensional coordinate system and building an approximate internal state of the three-dimensional space, thereby reconstructing the dimensions of the indoor three-dimensional space. The invention also provides a device for three-dimensional information collection, analysis and processing, comprising a three-dimensional space scanning device, an image acquiring module, an image processing module and an image conversion module. The method and the device have the advantage that point information from the two-dimensional image sets is mapped to three-dimensional space information, so that a three-dimensional model of an indoor space can be rebuilt quickly.
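Step S3's mapping superposition can be sketched as follows, assuming each two-dimensional image set reduces to a depth profile captured at a known scan angle; the rotational geometry here is a simplified stand-in for the device's actual scanning pattern:

```python
import numpy as np

def depth_scan_to_points(depth_rows, angle_step_deg):
    """Sketch of mapping superposition: each row is a depth profile
    captured at one time-ordered scan angle; rotate each profile into a
    common 3-D frame to approximate the room interior as a point cloud."""
    points = []
    for i, row in enumerate(depth_rows):
        theta = np.deg2rad(i * angle_step_deg)  # angle from scan order
        z = np.arange(len(row), dtype=float)    # vertical sample index
        x = row * np.cos(theta)
        y = row * np.sin(theta)
        points.append(np.stack([x, y, z], axis=1))
    return np.concatenate(points, axis=0)
```

The resulting point cloud approximates the interior surfaces, from which the room dimensions can be read off as the extent of the points.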

Patent
24 Nov 2017
TL;DR: In this article, a network training method for the stylization of images is presented. In each iteration, a first sample image and a second sample image are extracted; a second network corresponding to the style of the first sample image is obtained from a first network and the first sample image; the second network generates a third sample image from the second sample image; and the weight parameters of the first network are updated according to the losses between the third sample image and each of the two sample images.
Abstract: The present invention discloses a network training method, a network training device, a computing apparatus and a computer storage medium. The network training method is completed through a plurality of iterations. Each iteration includes the following steps: a first sample image and a second sample image are extracted; a second network corresponding to the style of the first sample image is obtained according to a first network and the first sample image; the second network is utilized to generate a third sample image corresponding to the second sample image; and the weight parameters of the first network are updated according to the loss between the third sample image and the first sample image and the loss between the third sample image and the second sample image. These training steps are performed iteratively until a predetermined convergence condition is satisfied. According to the network training method provided by the present invention, a first network applicable to any style image and any content image can be obtained through training; the first network enables the quick acquisition of a corresponding image conversion network, and therefore the efficiency of image stylization can be improved.
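The iteration described above can be sketched with toy stand-ins for the two networks; the linear first network, the scalar second network and the finite-difference gradient are all illustrative simplifications, not the patent's architecture:

```python
import numpy as np

def first_network(w, style_img):
    """Toy stand-in for the 'first' network: maps a style image to the
    parameters of a per-style 'second' conversion network."""
    return w @ style_img.ravel()

def second_network(params, content_img):
    """Toy 'second' network: converts the content image using the
    predicted parameter (real networks are convolutional)."""
    return params.mean() * content_img

def train_step(w, style_img, content_img, lr=1e-3, eps=1e-4):
    """One iteration: generate a third image from the second (content)
    image via the second network, then update the first network's
    weights from the two losses. A finite-difference gradient stands in
    for backpropagation."""
    def loss(w):
        out = second_network(first_network(w, style_img), content_img)
        style_loss = np.mean((out - style_img) ** 2)      # vs first sample
        content_loss = np.mean((out - content_img) ** 2)  # vs second sample
        return style_loss + content_loss
    base = loss(w)
    g = np.zeros_like(w)
    for idx in np.ndindex(*w.shape):
        w2 = w.copy()
        w2[idx] += eps
        g[idx] = (loss(w2) - base) / eps
    return w - lr * g, base
```

Only the first network's weights are updated; the per-style second network is always re-derived from it, which is what makes the trained first network reusable for any style image.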

Patent
17 May 2017
TL;DR: The embodiment of the utility model avoids the image blur caused by shake during aerial photography without installing a gimbal stabilizer on the unmanned aerial vehicle, which reduces the UAV's weight and extends its flight endurance.
Abstract: The embodiment of the utility model discloses an aerial camera arranged on the gimbal of an unmanned aerial vehicle. The aerial camera comprises an image sensor, a processor and an inertial measurement unit. The image sensor, connected with the processor, is used to capture images, convert them into image signals, and transmit the image signals to the processor. The inertial measurement unit, connected with the processor, is used to acquire the motion state data of the aerial camera and send the data to the processor. The processor performs compensation processing on the received image signal according to the received motion state data and outputs the compensated image signal. The embodiment of the utility model avoids the image blur caused by shake during aerial photography without installing a gimbal stabilizer on the unmanned aerial vehicle, which reduces the UAV's weight and extends its flight endurance.
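The compensation step can be sketched under a small-angle model, where rotation over an interval shifts the image by roughly focal_length_px × angular_rate × dt pixels; the pixel-roll correction and the axis convention are illustrative, not the utility model's actual processing:

```python
import numpy as np

def compensate_shift(frame, gyro_rate, dt, focal_px):
    """Small-angle motion compensation: estimate the pixel shift caused
    by camera rotation from inertial-measurement-unit rates and shift
    the frame back by that amount (whole pixels only here).
    gyro_rate = (pitch_rate, yaw_rate) in rad/s."""
    dy = int(round(focal_px * gyro_rate[0] * dt))  # pitch -> vertical
    dx = int(round(focal_px * gyro_rate[1] * dt))  # yaw -> horizontal
    return np.roll(np.roll(frame, -dy, axis=0), -dx, axis=1)
```

A production implementation would resample at sub-pixel accuracy and handle the rolled-in border, but the principle of trading a mechanical stabilizer for IMU-driven digital correction is the same.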

Patent
01 Feb 2017
TL;DR: In this article, a method and device for constructing a classification model are presented: sample images respectively corresponding to a first image category and a second image category are obtained, along with spectrograms corresponding to the sample images, wherein the spectrograms are obtained through image conversion of the sample images according to bases of different sizes.
Abstract: The present invention provides a method and device for constructing a classification model. The method comprises: obtaining sample images respectively corresponding to a first image category and a second image category, and obtaining spectrograms corresponding to the sample images, wherein the spectrograms are obtained through image conversion of the sample images according to bases of different sizes; determining the features respectively corresponding to the sample images according to the spectrograms, and constructing the sample image set according to those features and the sample images; and performing training on the sample image set to obtain a classification model, wherein the classification model is configured to determine the image category of images to be classified. According to this scheme, the spectrograms are obtained through image conversion of the sample images according to bases of different sizes, and the features of the sample images are derived from the spectrograms to construct the classification model, so that the image category can be detected accurately.
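The conversion "according to bases of different sizes" can be sketched as block transforms at several block sizes; using a per-block FFT magnitude averaged into one spectrogram per scale is an illustrative interpretation of the patent's wording, not its exact method:

```python
import numpy as np

def multiscale_spectra(img, base_sizes=(8, 16)):
    """Tile the image into base_size x base_size blocks, take the 2-D
    FFT magnitude of each block, and average over blocks to obtain one
    spectrogram per base size."""
    spectra = []
    for b in base_sizes:
        h, w = (img.shape[0] // b) * b, (img.shape[1] // b) * b
        # Reshape into a grid of non-overlapping b x b blocks.
        blocks = img[:h, :w].reshape(h // b, b, w // b, b).swapaxes(1, 2)
        mags = np.abs(np.fft.fft2(blocks))      # FFT over the last two axes
        spectra.append(mags.mean(axis=(0, 1)))  # average spectrogram (b x b)
    return spectra

def spectral_features(img, base_sizes=(8, 16)):
    """Flatten the per-scale spectrograms into one feature vector for
    the classifier."""
    return np.concatenate([s.ravel()
                           for s in multiscale_spectra(img, base_sizes)])
```

Small bases capture fine texture while large bases capture coarser structure, so concatenating the scales gives the classifier frequency evidence at several resolutions.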