
Showing papers on "Image conversion published in 2016"


Journal ArticleDOI
TL;DR: This work provides users with simple methods for detecting and correcting problems in the image conversion process, and serves as an overview for developers who wish to either develop their own tools or adapt the open source tools created by the authors.

566 citations


Journal ArticleDOI
TL;DR: A new automatic analysis method is proposed which provides a quantitative assessment of meibomian gland dysfunction; it is fully automatic, provides fully reproducible results, and is insensitive to parameter changes.

31 citations


Patent
Akihiro Hayasaka1
28 Jul 2016
TL;DR: In this article, an image processing device (10) includes a posture estimation unit (110) that estimates posture information including a yaw angle and a pitch angle of a person's face from an input image including the person's face, and an image conversion unit (120) that generates a normalized face image in which the orientation of the face is corrected, on the basis of positions of a plurality of feature points in a face region image.
Abstract: An image processing device (10) includes a posture estimation unit (110) that estimates posture information including a yaw angle and a pitch angle of a person's face from an input image including the person's face, and an image conversion unit (120) that generates a normalized face image in which an orientation of a face is corrected, on the basis of positions of a plurality of feature points in a face region image which is a region including the person's face in the input image, positions of the plurality of feature points in a three-dimensional shape model of a person's face, and the posture information.
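The pose-correction idea in the abstract can be illustrated numerically. Below is a minimal numpy sketch, assuming only the yaw and pitch angles mentioned in the abstract and a simple rotation-matrix convention; the patent's actual normalization also uses the 2D/3D feature-point correspondences, which are omitted here, and the function name and axis conventions are illustrative, not from the patent.

```python
import numpy as np

def normalize_pose(points3d, yaw, pitch):
    """Apply the inverse of the estimated yaw (about y) and pitch
    (about x) rotations to 3D face points, so the face ends up
    frontal before re-projection. points3d is an (N, 3) array."""
    cy, sy = np.cos(-yaw), np.sin(-yaw)
    cx, sx = np.cos(-pitch), np.sin(-pitch)
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    # Undo pitch after undoing yaw; row vectors, hence the transpose.
    return points3d @ (Rx @ Ry).T
```

A point that was rotated by a known yaw is returned to its frontal position when that yaw is passed in.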

27 citations


Patent
28 Sep 2016
TL;DR: In this article, a method and a system used for converting a 2D image to a 3D image based on deep learning is presented, which is characterized in that pixel unit information of the 2D single parallax image is acquired by using a VGG16 deep convolutional neural network according to the pixel units information; a color histogram relation, a color space relation, and a texture relation between adjacent pixel units of the image are acquired.
Abstract: The invention provides a method and a system used for converting a 2D image to a 3D image based on deep learning. The method is characterized in that pixel unit information of a 2D single parallax image is acquired; the unary information of the 2D single parallax image is acquired by using a VGG16 deep convolutional neural network according to the pixel unit information; a color histogram relation, a color space relation, and a texture relation between adjacent pixel units of the 2D single parallax image are acquired; a multi-scale deep full convolutional neural network is trained according to the unary information, the color histogram relation, the color space relation, and the texture relation between the adjacent pixel units of the 2D single parallax image; the unit pixel block depth map of the 2D single parallax image is predicted by using the trained multi-scale deep full convolutional neural network; the unit pixel block depth image is input in a coloring device to acquire the 3D image corresponding to the 2D single parallax image. The defects of the prior art such as high costs and inaccurate result caused by manual operation of converting the 2D single parallax image to the depth image are prevented, and the automatic 2D-to-3D image conversion is realized.
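The final step, turning the predicted depth map into a stereo (3D) image, can be illustrated independently of the networks. Below is a minimal numpy sketch of naive depth-image-based rendering; the patent's "coloring device" is not publicly specified, so the linear disparity model, the function name, and the hole handling (unfilled pixels stay zero) are all assumptions.

```python
import numpy as np

def depth_to_stereo(image, depth, max_disparity=8):
    """Render a crude right-eye view by shifting each pixel left by a
    disparity proportional to its normalized nearness. This sketches
    only the rendering step, not the CNN depth estimation."""
    h, w = depth.shape
    right = np.zeros_like(image)
    d = depth.astype(np.float64)
    d = (d - d.min()) / (np.ptp(d) + 1e-9)   # normalize depth to [0, 1]
    disparity = np.round((1.0 - d) * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                right[y, nx] = image[y, x]
    return right
```

The original frame serves as the left view; occluded regions would need inpainting in a real converter.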

19 citations


Book ChapterDOI
20 Apr 2016
TL;DR: The main aim of this article is to introduce the least information loss (LIL) algorithm as a novel approach to minimize the information loss caused by the transformation of the primary camera signals to 8 bit per pixel.
Abstract: Nowadays, most digital images are captured and stored as 16 or 12 bit per pixel integers; however, most personal computers can only display images as 8 bit per pixel integers. Besides, each microarray experiment produces hundreds of images, which need larger storage space if they are stored at 16 or 12 bit. The conversion is in most cases done on single images by an algorithm that is not apparent to the user. A simple method to avoid the problem is converting 16 or 12-bit images to 8 bit by direct division of the 12-bit intensity range into 256 sections and counting the number of points in each of them. Although this approach preserves the proportions of the camera signals, it leads to severe loss of information due to the loss of intensity depth resolution. The main aim of this article is to introduce the least information loss (LIL) algorithm as a novel approach to minimize the information loss caused by the transformation of the primary camera signals (16 or 12 bit per pixel) to 8 bit per pixel. The least information loss algorithm is based on the omission of unoccupied intensities and the transformation of the remaining points to 8 bit. This approach not only preserves information by storing the intervals in the image EXIF file for further analysis, but also improves object contrast for better visual inspection and object-oriented classification. The LIL algorithm may also be applied to image series, where it enables comparison of primary camera data at scales identical over the whole series. This is particularly important in cases where the coloration is only apparent and reflects various physical processes, such as in microscopy imaging.
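The core of the LIL idea — omit unoccupied intensities, then spread the remaining occupied levels over 0..255 — can be sketched in a few lines of numpy. The rounding scheme and the layout of the returned level list are assumptions; the paper stores the intervals in the EXIF file so the mapping can be inverted.

```python
import numpy as np

def least_information_loss(img16):
    """Drop intensity levels no pixel uses, then map the remaining
    (occupied) levels linearly onto 0..255. Returns the 8-bit image
    and the kept levels (which would be stored for inversion)."""
    levels = np.unique(img16)                 # occupied intensities only
    ranks = np.searchsorted(levels, img16)    # rank among occupied levels
    if len(levels) > 1:
        img8 = np.round(ranks * 255.0 / (len(levels) - 1)).astype(np.uint8)
    else:
        img8 = np.zeros(img16.shape, dtype=np.uint8)
    return img8, levels
```

Contrast improves because sparse high-bit data (e.g. microarray scans) typically occupies far fewer than 2^16 distinct levels.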

14 citations


Patent
14 Dec 2016
TL;DR: In this article, a stereo matching 3D reconstruction method based on dynamic programming is presented, which begins by adjusting the positions of the two video cameras so that the imaging planes of the cameras are as parallel as possible.
Abstract: The invention discloses a stereo matching three-dimensional reconstruction method based on dynamic programming. A system according to the method is composed of two video cameras. The stereo matching three-dimensional reconstruction method comprises the following steps: (1) adjusting the positions of the two video cameras so that the imaging planes of the two video cameras are as parallel as possible; (2) performing calibration of the three-dimensional measuring system, obtaining the intrinsic and extrinsic parameters of the two video cameras, and obtaining a correspondence between pixel coordinates on an image and the world coordinate system; (3) performing epipolar line rectification and image conversion; (4) obtaining a parallax graph (disparity map) by means of a stereo matching algorithm based on dynamic programming; (5) performing parallax correction; and (6) obtaining a three-dimensional point cloud from the camera calibration parameters and the parallax graph through a spatial intersection method. The stereo matching three-dimensional reconstruction method has the advantages of high parallax graph precision and high real-time performance. Furthermore, the three-dimensional point cloud of the image can be reconstructed accurately, quickly, and automatically.
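For a rectified, parallel stereo rig, step (6) — the spatial-intersection back-projection — reduces to the textbook relations Z = f·B/d, X = (u − cx)·Z/f, Y = (v − cy)·Z/f. A minimal numpy sketch follows, with the camera parameters passed in explicitly (the patent obtains them from calibration); pixels with zero disparity are skipped, which is an assumption about how invalid matches are flagged.

```python
import numpy as np

def disparity_to_points(disp, f, baseline, cx, cy):
    """Back-project a disparity map into a 3D point cloud, assuming a
    rectified parallel stereo pair with focal length f (pixels),
    baseline B, and principal point (cx, cy)."""
    v, u = np.nonzero(disp > 0)           # pixel coordinates of valid matches
    d = disp[v, u].astype(np.float64)
    Z = f * baseline / d                  # depth from triangulation
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.column_stack([X, Y, Z])     # (N, 3) point cloud
```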

13 citations


Patent
13 Apr 2016
TL;DR: In this article, a hybrid feature model fusing a salient map, an edge line map and a gradient map is adopted to obtain an energy function, and according to the energy function a line clipping operation is carried out to complete image scaling.
Abstract: The invention relates to an image scaling method based on content awareness, and relates to graphic image conversion in an image plane. An energy function is obtained by adopting a hybrid feature model fusing a salient map, an edge line map, and a gradient map, and according to the energy function, a line clipping operation is carried out to complete image scaling. The image scaling method comprises the steps of: inputting a color image and carrying out preprocessing; simultaneously extracting the salient map and the salient target image of the original color image, the edge map of the gray scale image fused with line information, and the gradient map of the gray scale image; fusing the three feature maps by utilizing the HFPM (hybrid feature model) algorithm to obtain the energy function; and clipping the original image with a line clipping algorithm. The method disclosed by the invention overcomes the defects of existing line clipping methods, which define the energy function only from the gradient map of an image and still cause distortion and loss of partial image information in the image scaling process.
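The "line clipping" (seam carving) operation driven by an energy function is a standard dynamic program. The sketch below finds one minimal-cost vertical seam in any energy map; the HFPM fusion that produces the energy map is not reproduced here, so the energy array is simply an input.

```python
import numpy as np

def min_vertical_seam(energy):
    """Dynamic-programming seam search: cumulative cost
    M[y, x] = energy[y, x] + min of the three neighbors above,
    then the seam is read back by following the minima."""
    h, w = energy.shape
    M = energy.astype(np.float64).copy()
    for y in range(1, h):
        left = np.r_[np.inf, M[y - 1, :-1]]
        right = np.r_[M[y - 1, 1:], np.inf]
        M[y] += np.minimum(np.minimum(left, M[y - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = np.argmin(M[-1])
    for y in range(h - 2, -1, -1):        # backtrack from bottom row
        x = seam[y + 1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam[y] = lo + np.argmin(M[y, lo:hi])
    return seam                            # column index per row
```

Removing the returned column from each row shrinks the image width by one; repeating the search completes the scaling.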

13 citations


Patent
13 Jul 2016
TL;DR: In this paper, an omnidirectional imaging system based on a monocular camera, and an imaging method thereof, belonging to the technical field of automobile electronics, are presented. The imaging method includes the following steps: 1) acquiring fisheye images of the tail of a vehicle through a fisheye camera; 2) converting the acquired fisheye images into back-projection aerial images; and 3) acquiring the aerial images in the image conversion device, integrating the image of the current frame with the global image in the image storage device, and sending the integrated image to the image display device.
Abstract: The invention discloses an omnidirectional imaging system based on a monocular camera, and an imaging method thereof, belonging to the technical field of automobile electronics. The omnidirectional imaging system includes an image acquisition device, wherein the image acquisition device is connected with an image conversion device; the image conversion device is connected with an image processing device; the image processing device is connected with an image display device; an image storage device is connected with the image processing device; and a vehicle detection device is connected with the image processing device. The imaging method includes the following steps: 1) acquiring fisheye images of the tail of a vehicle through a fisheye camera; 2) converting the acquired fisheye images into back-projection aerial images; and 3) acquiring the aerial images in the image conversion device, integrating the image of the current frame with the global image in the image storage device, and sending the integrated image to the image display device for display. The omnidirectional imaging system based on a monocular camera, and the imaging method thereof, can generate an omnidirectional image by integrating the current image of the monocular rear-view camera with the global image, so that the generation cost is reduced and the operation efficiency is improved, and can be used in a vehicle.

13 citations


Patent
20 Jun 2016
TL;DR: In this paper, an object recognition device is described that includes an image conversion part for setting a plurality of candidate regions having an arbitrary area in a recognition object image, a recognition calculation part for performing a recognition process on the image information included in each candidate region set by the image conversion part, and a map generation part for preparing a map for each object candidate based on the region position of each candidate region.
Abstract: PROBLEM TO BE SOLVED: To provide an object recognition device, an object recognition method, and a program capable of determining a main object in an image and the position of the object with higher recognition accuracy. SOLUTION: An object recognition device includes: an image conversion part for setting a plurality of candidate regions having an arbitrary area in a recognition object image; a recognition calculation part for performing a recognition process on the image information included in each candidate region set by the image conversion part and calculating a degree of certainty for each object candidate in each candidate region; a map generation part for preparing a map for each object candidate based on the degree of certainty calculated by the recognition calculation part and the region position of each candidate region; and a position calculation part for specifying the position of the object based on the map of the object candidate prepared by the map generation part. SELECTED DRAWING: Figure 1

12 citations


Proceedings ArticleDOI
01 Sep 2016
TL;DR: A novel decolorization strategy built on image fusion principles that blends the three color channels R, G, B guided by two weight maps that filter the local transitions and measure the dominant values of the regions using the Laplacian information.
Abstract: In this paper we introduce a novel decolorization strategy built on image fusion principles. Decolorization (color-to-grayscale) is an important transformation used in many monochrome image processing applications. We demonstrate that, aside from color spatial distribution, local information plays an important role in maintaining the discriminability of the image conversion. Our strategy blends the three color channels R, G, B guided by two weight maps that filter the local transitions and measure the dominant values of the regions using the Laplacian information. In order to minimize artifacts introduced by the weight maps, our fusion approach is designed in a multi-scale fashion, using a Laplacian pyramid decomposition. Additionally, compared with most of the existing techniques, our straightforward technique has the advantage of being computationally efficient. We demonstrate that our technique is temporally coherent, making it suitable for decolorizing videos. A comprehensive qualitative and quantitative evaluation based on an objective visual descriptor demonstrates the utility of our decolorization technique.
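A single-scale toy version of the fusion rule — without the Laplacian pyramid or the second weight map — can show the mechanism: each channel is weighted at every pixel by the magnitude of its Laplacian, so channels with stronger local transitions dominate the gray value. This is a simplification for illustration, not a reimplementation of the paper's method.

```python
import numpy as np

def laplacian_weighted_gray(rgb, eps=1e-6):
    """Blend R, G, B into gray with per-pixel weights proportional to
    each channel's |Laplacian| (local transition strength). eps keeps
    flat regions well-defined (they fall back to the channel mean)."""
    def lap(c):
        p = np.pad(c, 1, mode='edge')      # 5-point Laplacian stencil
        return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
                - 4.0 * p[1:-1, 1:-1])
    chans = [rgb[..., i].astype(np.float64) for i in range(3)]
    weights = [np.abs(lap(c)) + eps for c in chans]
    total = sum(weights)
    return sum(w * c for w, c in zip(weights, chans)) / total
```

In flat regions all weights equal eps and the result reduces to the plain channel average.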

12 citations


Patent
20 May 2016
TL;DR: In this article, a frequency resolution unit performs frequency resolution of a radiographic image to generate band images representing frequency components in a plurality of frequency bands, and a synthesis unit synthesizes the converted band images to generate a processed radiographic images with converted contrast.
Abstract: A frequency resolution unit performs frequency resolution of a radiographic image to generate band images representing frequency components in a plurality of frequency bands. A reference image generation unit generates a reference image representing information associated with scattered radiation included in the radiographic image, and generates a plurality of band reference images corresponding to a plurality of frequency bands from the reference image. A band image conversion unit performs conversion between the corresponding pixels of the band reference images and the band images in the corresponding frequency bands to generate converted band images. A synthesis unit synthesizes the converted band images to generate a processed radiographic image with converted contrast.
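The frequency-resolution/synthesis pair can be sketched as a Laplacian-style band stack: each band image is the difference between successively stronger blurs, and summing the bands with the low-frequency residual reconstructs the input exactly — which is what allows per-band conversion followed by re-synthesis. The 3×3 box blur and the number of levels below are assumptions; the patent does not specify its filters.

```python
import numpy as np

def split_bands(img, levels=3):
    """Decompose an image into band images (differences of successive
    blurs) plus a low-frequency residual. sum(bands) + residual
    reconstructs the input exactly (telescoping sum)."""
    def box_blur(a):
        p = np.pad(a, 1, mode='edge')
        return sum(p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
                   for dy in range(3) for dx in range(3)) / 9.0
    bands, current = [], img.astype(np.float64)
    for _ in range(levels):
        blurred = box_blur(current)
        bands.append(current - blurred)    # one frequency band
        current = blurred
    return bands, current                  # band images + residual
```

In the patent, each band would be converted against its band reference image before the synthesis sum.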

Patent
Seung-Ho Park1, Seung-hoon Han1, Ji-Won Choi1, Ho-Cheon Wey1, Young-Su Moon1 
08 Aug 2016
TL;DR: In this paper, an electronic device and an image conversion method of the electronic device are presented, the electronic device comprising: a receiving unit for receiving a first image from a source device; a decoding unit for decoding brightness information of the first image; a converting unit for converting a dynamic range of the first image on the basis of the decoded brightness information, using a mapping function; and a display unit for displaying a second image having the converted dynamic range on a display.
Abstract: An electronic device and an image conversion method of the electronic device, the electronic device comprising: a receiving unit for receiving a first image from a source device; a decoding unit for decoding brightness information of the first image; a converting unit for converting a dynamic range of the first image on the basis of the decoded brightness information, using a mapping function; and a display unit for displaying a second image having the converted dynamic range on a display, wherein the mapping function is a curve function including a plurality of points determined on the basis of the first image, a characteristic of a change in brightness of a display of the source device, and a characteristic of brightness of a scene of the first image. The present disclosure may generate, for example, a high dynamic range (HDR) image, which is improved from a standard dynamic range (SDR) image.
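The converting unit's mapping function can be modeled, for illustration, as a piecewise-linear curve through control points. The patent derives its curve from the first image, the source display's brightness-change characteristic, and scene brightness, none of which are public; the control points below are therefore placeholders, not the patented mapping.

```python
import numpy as np

def expand_dynamic_range(sdr, points_in, points_out):
    """Apply a mapping curve, modeled as piecewise-linear interpolation
    through control points, to normalized SDR values in [0, 1].
    Returns HDR-normalized values in [0, 1]."""
    sdr = np.clip(np.asarray(sdr, dtype=np.float64), 0.0, 1.0)
    return np.interp(sdr, points_in, points_out)
```

A curve that is concave near black and convex near white (as in the example points in the test) expands highlights more than shadows, which is the typical SDR-to-HDR behavior.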

Patent
03 Feb 2016
TL;DR: In this article, the utility model discloses a multi-functional mobile image processing device, including a controller, a display screen connected with the controller, and a camera connected with the controller; the camera includes a filter and an image sensor; the controller receives instructions sent by virtual or physical buttons on the device and controls the image sensor, which receives light of different wavelengths through the filter, to convert the image into a data signal, processes it, and then outputs the processed image data.
Abstract: The utility model discloses a multi-functional mobile image processing device, including a controller and a display screen connected with the controller; the device is also equipped with a camera connected with the controller, and the camera includes a filter and an image sensor. The controller receives instructions sent by virtual or physical buttons on the device and controls the image sensor, which receives light of different wavelengths through the filter, to convert the image into a data signal, processes it, and then outputs the processed image data. The device has a wide range of uses and satisfies the demands of different users.

Book ChapterDOI
01 Jan 2016
TL;DR: The proposed method of vehicle detection using image conversion into point image representation is computationally fast and simple and can be used in real-time processing.
Abstract: This paper presents a method of image conversion into point image representation. Conversion is carried out with the use of small image gradients. The layout of the binary values of the point image representation corresponds to the edges of objects contained in the source image. Vehicles are detected through analysis of the detection field state. The state of the detection field is determined on the basis of the sum of the edge points within the detection field. The proposed method of vehicle detection is computationally fast and simple. Vehicle detection with the use of image conversion into point image representation is efficient and can be used in real-time processing. Experimental results are provided.
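A minimal numpy sketch of the two stages — conversion into a point (edge) representation from small image gradients, and the detection-field test based on the sum of edge points — is given below. The gradient operator, the thresholds, and the function names are assumptions; the chapter does not publish exact values.

```python
import numpy as np

def point_representation(gray, thresh=10):
    """A pixel becomes an edge point when the magnitude of its
    horizontal or vertical first difference exceeds a small threshold,
    giving a binary point image aligned with object edges."""
    g = gray.astype(np.float64)
    gx = np.abs(np.diff(g, axis=1, prepend=g[:, :1]))
    gy = np.abs(np.diff(g, axis=0, prepend=g[:1, :]))
    return (np.maximum(gx, gy) > thresh).astype(np.uint8)

def detection_field_state(points, y0, y1, x0, x1, min_points=1):
    """A vehicle is reported when the number of edge points inside the
    detection field rectangle reaches min_points."""
    return int(points[y0:y1, x0:x1].sum()) >= min_points
```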

Patent
01 Sep 2016
TL;DR: In this article, a medical image display device is described that can display medical images in proper gradation without using hardware color management, and without modifying, for gradation correction, each individual application that has a medical image display function.
Abstract: A viewer part 38b accesses an image viewer web server 14 from a user terminal 16, in which a medical image display program 38 is installed, to acquire at least medical image data 13a; input data accepted by a touch input part 21a is transmitted by an input transmission part 31 and is transferred to the image viewer web server 14; the medical image data 13a output from the image viewer web server 14 is acquired by a capture acquisition part 34; at least the medical image data 13a portion is subjected to image conversion in an image conversion part 35; and the image data for display after conversion is output to a display part 23 by an output part 36. Accordingly, there is provided a medical image display device that can display a medical image in proper gradation without using hardware color management, and without modifying, for gradation correction, each individual application having a medical image display function.

Patent
17 Feb 2016
TL;DR: In this paper, a B/S architecture-based radiation image inspection system is described, which includes a user end which initiates an image operation request, and directly displays an image result on the browser of the user end after obtaining a response.
Abstract: The invention provides a B/S architecture-based radiation image inspection system. The system includes: a user end which initiates an image operation request and directly displays the image operation result on its browser after obtaining a response; an image processing module which queries the image whose processing is required according to the request of the user end, performs the required operation on the image, converts the processing result into an image of a specified format and specified quality, and returns the image to the browser of the user end for display; and a Web server which receives requests and authentication from the browser user of the user end, rejects any requests of unauthorized users, forwards legitimate requests of authorized users to the image processing module, and returns image operation results from the image processing module to the user end. Since the system adopts image caching, distributed image processing, and image conversion, its processing performance is close to that of the original standalone software.

Patent
26 Oct 2016
TL;DR: In this article, a fuzzy rough set-based sleeping posture pressure image recognition method is proposed for real-time pressure data acquired by a pressure-sensitive sensor array located below the sleeping position.
Abstract: The invention discloses a fuzzy rough set-based sleeping posture pressure image recognition method. The method comprises the following steps: 1, data acquisition: real-time pressure data detected by a pressure-sensitive sensor array located below the sleeping position are acquired; 2, image conversion: the real-time pressure data acquired in the first step are converted into a pressure image; 3, image pretreatment is carried out on the pressure image obtained in the second step; 4, image feature extraction: features are extracted from the pretreated pressure image, and the extracted feature values form a feature set for each single pressure image; and 5, a fuzzy rough set method is adopted to process the image features extracted in the fourth step to realize sleeping posture recognition.
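Step 2, converting the raw pressure-sensor array into a pressure image, is typically a normalization to 8-bit grayscale; the exact scaling is not given in the abstract, so min-max normalization is assumed in this sketch.

```python
import numpy as np

def pressure_to_image(pressure):
    """Scale raw sensor-array readings to an 8-bit grayscale pressure
    image via min-max normalization (an assumed scaling; the patent
    does not specify one)."""
    p = np.asarray(pressure, dtype=np.float64)
    rng = p.max() - p.min()
    if rng == 0:
        return np.zeros(p.shape, dtype=np.uint8)
    return np.round((p - p.min()) * 255.0 / rng).astype(np.uint8)
```

The resulting image can then feed the pretreatment and feature-extraction steps like any grayscale image.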

Journal ArticleDOI
TL;DR: The objective is to present the process performed on proprietary image formats of histological and cytological slides in pathology to convert them to be compliant with the Digital Imaging and Communications in Medicine (DICOM) standard, according to supplements 145 and 122, and their subsequent storage in a Picture Archiving and Communication System (PACS).
Abstract: Introduction/Background In recent years different technological solutions have emerged, from several manufacturers, for the scanning or digitization of histological and cytological slides in pathology. High resolution scanning is usually based on tiles (small fragments) or stripes (longitudinal areas) that are combined or stitched together to create a high magnification (usually equivalent to 20x to 40x) global image. Thus, a large digital slide can be displayed using specific viewers to simulate the functions of a conventional microscope. A pyramid of images is a common solution. But each scanner manufacturer optimizes the process of collecting, managing and storing images in its own format, making the interconnection between systems and the ability to share images between different formats difficult; generally, a heavy process of image conversion is needed and a loss of information occurs. Aims The objective is to present the process performed on proprietary image formats of histological and cytological slides in pathology to convert them to be compliant with the Digital Imaging and Communications in Medicine (DICOM) standard, according to supplements 145 and 122, and their subsequent storage in a Picture Archiving and Communication System (PACS). Methods Python was chosen as the programming platform due to its versatility and available tools. Furthermore, for future projects, Python can be used to apply signal analysis to digital images. A Pentium Core i3, 4GB RAM, 1TB server with Ubuntu 14.04.3 LTS Server was used. On the server, the Eclipse 3.8.1 development platform allowed the installation of the PyDev for Eclipse 4.3.0 and Eclipse JGit 1.3.0 plugins. It has been connected to a GitHub repository to manage development versions. The solution has been structured into a Python package that takes images in proprietary formats, standardizes them into images compliant with DICOM supplement 145, and sends them to the PACS.
Results In order to test compatibility, David A. Clunie's dicom3tools was used. This allowed verification of all tags of the generated files, indicating those that are required for each image; it also reports tags whose included data does not match the standard and those with values that do not correspond to it. This tool helped to finally obtain a result of 0% errors in all generated files. Regarding storage tests, two different PACS were used: first, in collaboration with the T-Systems company, the open source dcm4chee DICOM Archive 2 (dcm4che.org); and second, in collaboration with the IRE Rayos X company, the commercial IRE Store Channel 4.3 was also used in several tests. After some tests with commercial and open source PACS, we could draw the following conclusions. In the negotiation phase, the PACS did not recognize the predetermined configuration stated in supplement 145; the solution was changing the SOP class to "VL Microscopic Image Storage". Each level of the pyramid will be stored as an instance inside the same series, and each tile as a frame inside a multi-frame object. The dcm4che.org PACS offers a WADO service that allows accessing each frame separately inside the same object, which can be useful for the implementation of slide viewers in pathology. Acknowledgment: This work has been supported by the AIDPATH project, an EU 7FP IAPP Marie Curie action, contract number 612471.
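The tiling scheme described in the results — each pyramid level stored as one multi-frame instance whose frames are tiles — implies a simple layout computation. A back-of-envelope sketch follows; the 256-pixel tile size and the factor-of-two downsampling are assumptions (supplement 145 permits other choices).

```python
import math

def pyramid_layout(width, height, tile=256):
    """Compute (level_width, level_height, frame_count) per pyramid
    level for a tiled whole-slide image: each level halves the
    resolution until the image fits in a single tile. frame_count is
    the number of frames the level's multi-frame instance would hold."""
    levels = []
    w, h = width, height
    while True:
        cols = math.ceil(w / tile)
        rows = math.ceil(h / tile)
        levels.append((w, h, rows * cols))
        if cols == 1 and rows == 1:
            break
        w = max(1, w // 2)
        h = max(1, h // 2)
    return levels
```

Such a layout is what a WADO service then exposes frame-by-frame for slide viewers.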

Patent
29 Jun 2016
TL;DR: In this paper, an unmanned aerial vehicle based on visual characteristics is described, comprising a vehicle body and a monitoring device arranged on it; the monitoring device specifically comprises a pre-processing module, a detection tracking module, and a recognition output module.
Abstract: The invention discloses an unmanned aerial vehicle based on visual characteristics. The unmanned aerial vehicle comprises a vehicle body and a monitoring device arranged on it; the monitoring device specifically comprises a pre-processing module, a detection tracking module, and a recognition output module; the pre-processing module includes three sub-modules: an image conversion module, an image filtering module, and an image enhancement module; and the detection tracking module includes three sub-modules: a construction module, a loss discrimination module, and an updating module. By applying video image technology to the unmanned aerial vehicle, malicious damage behaviors can be effectively monitored and recorded, and the unmanned aerial vehicle offers good timeliness, accurate positioning, high adaptive capability, complete preservation of image details, and high robustness.

Patent
03 Aug 2016
TL;DR: In this paper, a pair of visualized intelligent glasses is presented, comprising a glasses body and a control system; the control system comprises a microprocessor, a voice recognition and processing unit, a character and image conversion unit, and a projection display unit, with the voice acquisition unit and the projection display unit arranged on the glasses body.
Abstract: The invention discloses a pair of visualized intelligent glasses. The visualized intelligent glasses comprise a glasses body and a control system. The control system comprises a microprocessor, a voice recognition and processing unit, a character and image conversion unit, a projection image processing unit, a projector main board, a voice acquisition unit, and a projection display unit, wherein the voice acquisition unit and the projection display unit are arranged on the glasses body. The voice acquisition unit acquires voice signals and transmits them to the voice recognition and processing unit. The voice recognition and processing unit recognizes the voice signals, converts them into character signals, and transmits the character signals to the character and image conversion unit. The character and image conversion unit processes the character signals to generate image signals and transmits them to the projection image processing unit. The projection image processing unit processes the image signals to generate video image signals and transmits them to the projector main board. The projector main board processes the video image signals to generate video images suitable for display by the projection display unit, and the video images are displayed by the projection display unit. Voice is thereby converted into visualized images, improving everyday convenience for hearing-impaired users.

Patent
24 Feb 2016
TL;DR: In this article, a method for generating a 3D printing file using two-dimensional image conversion is presented, which comprises: a step of receiving 2D images in four directions (front, rear, left, and right) that include a target object; a step of preprocessing the inputted 2D image in each direction to extract the target object with noise removed; and a step of correcting the extracted target object to equalize its size in each direction and extracting, as depth information, the number of pixels given a color for each vertical axis coordinate of the side images.
Abstract: The present invention relates to an apparatus and a method for generating a three-dimensional printing file using two-dimensional image conversion. The method comprises: a step of receiving two-dimensional images in four directions (front, rear, left, and right) that include a target object; a step of preprocessing the inputted two-dimensional image in each direction to extract the target object with noise removed; a step of correcting the extracted target object to equalize its size in each direction, and extracting, as depth information, the number of pixels given a color for each vertical axis coordinate of the side images; a step of rendering the target object in each direction based on the depth information to generate a three-dimensional image; and a step of generating a three-dimensional printing file including shape information of the three-dimensional image by triangle mesh generation and a slice operation.
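The depth-extraction step — counting the colored (foreground) pixels for each vertical axis coordinate of a side image — is a one-liner once the silhouette mask exists. The preprocessing that produces the noise-free mask is not reproduced here; the mask input is assumed binary.

```python
import numpy as np

def side_view_depth(mask):
    """For a preprocessed side-view binary silhouette, take the count
    of foreground pixels along each image row as the depth value for
    that vertical coordinate, as described in the method."""
    return mask.astype(bool).sum(axis=1).astype(int)
```

The resulting per-row depths drive the rendering step that extrudes the front/rear views into a 3D shape.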

Patent
26 Oct 2016
TL;DR: In this paper, a panorama imaging method and device based on a single camera are proposed; the device consists of a camera, a distortion correction module, an image splicing and updating module, an image conversion module, and an image processing module.
Abstract: The invention provides a panorama imaging method and device based on a single camera. The device comprises a camera, a distortion correction module, an image splicing and updating module, an image conversion module, and an image processing module, which are electrically connected in sequence. A storage module is electrically connected with the image splicing and updating module. According to the invention, only one camera is used, while four cameras are used in the prior art; and when the storage module receives a new global image, it updates the old global image with the new one, so the updating speed is high, storage space is saved, and the operation efficiency is improved. Both the image around the vehicle and the image below the vehicle can be displayed, so that the car backing process is clearer, the driver can operate the vehicle more conveniently and safely, the car backing difficulty is lowered, and the precision of the panorama car backing image is higher. The panorama imaging method and device have the advantages that the structure is simple, the cost is low, the interference is small, the error is small, the usage is convenient, and the car backing safety coefficient is increased.
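The splicing-and-updating step can be sketched as overwriting a region of the stored global image with the newest frame. Real systems warp and blend the frame first; the placement offsets here are assumed to come from elsewhere (e.g. vehicle odometry), and the function name is illustrative.

```python
import numpy as np

def update_global_image(global_img, frame, top, left):
    """Paste the current (already rectified) frame into the stored
    global image at the given offset, returning the updated mosaic
    without mutating the stored copy."""
    h, w = frame.shape[:2]
    out = global_img.copy()
    out[top:top + h, left:left + w] = frame
    return out
```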

Proceedings ArticleDOI
01 Sep 2016
TL;DR: The paper presents a novel image-based method of vehicle detection that analyzes the state of a detection field across consecutive frames taken from a video stream.
Abstract: The paper presents a novel image data-based method of vehicle detection. A sequence of input images consists of consecutive frames taken from the video stream, which is obtained from a camera placed over a road. Each image from the sequence is converted into a point representation separately; the conversion is performed by analyzing small image gradients. The point representation of an image consists of edge points whose layout follows the edges of objects in the image before conversion. The same detection field is defined for all input images, and its state is determined from the sums of edge points calculated within it. A vehicle driving through the detection field changes that state, so vehicles are detected by analyzing the state of the detection field. Experimental results are provided.
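A minimal sketch of the detection-field idea described above, assuming a grayscale frame as a nested list and a hypothetical baseline count calibrated on an empty road; the names, thresholds, and toy data are illustrative, not from the paper:

```python
def edge_points(img, thresh=30):
    """Return the set of (row, col) positions whose local gradient
    magnitude (sum of horizontal and vertical differences) exceeds thresh."""
    pts = set()
    for r in range(len(img) - 1):
        for c in range(len(img[0]) - 1):
            gx = abs(img[r][c + 1] - img[r][c])
            gy = abs(img[r + 1][c] - img[r][c])
            if gx + gy > thresh:
                pts.add((r, c))
    return pts

def field_occupied(img, field, baseline, thresh=30, margin=5):
    """field = (r0, r1, c0, c1). The field counts as occupied when the
    edge-point sum inside it exceeds the empty-road baseline by a margin."""
    r0, r1, c0, c1 = field
    count = sum(1 for (r, c) in edge_points(img, thresh)
                if r0 <= r < r1 and c0 <= c < c1)
    return count > baseline + margin

# Toy frames: a uniform "empty road" vs. a frame with a bright vehicle block.
empty = [[10] * 20 for _ in range(20)]
vehicle = [row[:] for row in empty]
for r in range(8, 14):
    for c in range(5, 15):
        vehicle[r][c] = 200
field = (6, 16, 3, 17)
print(field_occupied(empty, field, baseline=0))    # False
print(field_occupied(vehicle, field, baseline=0))  # True
```

In practice the baseline would be measured on frames known to show an empty detection field, and the gradient threshold tuned to the camera and lighting.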

Patent
30 Jun 2016
TL;DR: In this paper, the authors propose a tractor vehicle surrounding image generation device and method that make the image near segmentation region boundaries, in the image around a tractor vehicle to which a vehicle to be hauled is connected, easier to view.
Abstract: PROBLEM TO BE SOLVED: To provide a tractor vehicle surrounding image generation device and a tractor vehicle surrounding image generation method capable of making the image in the vicinity of segmentation region boundaries, in the image around a tractor vehicle to which a vehicle to be hauled is connected, easy to view. SOLUTION: The tractor vehicle surrounding image generation device comprises: a relative angle detection part 14 that detects the relative angle between the tractor vehicle and the connection part when generating a vehicle surroundings image by viewpoint conversion of images segmented from images captured by an on-vehicle camera 1 into a bird's-eye-view image; and an image combination part 12 that performs image conversion while changing the segmentation regions of the captured images, on the basis of the detected relative angle, so that the boundary of the region in which the connection part or the hauled vehicle appears matches the boundary of a segmentation region. Where these boundaries match, the discontinuity between neighboring segmentation regions is eliminated. SELECTED DRAWING: Figure 1
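The viewpoint conversion into a bird's-eye-view image mentioned above is typically a planar homography from the camera image to the ground plane; a minimal point-warping sketch follows. The matrices here are illustrative, not calibrated camera data:

```python
def warp_point(H, u, v):
    """Map an image point (u, v) through a 3x3 homography H
    (nested lists) into ground-plane coordinates (x, y)."""
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return ((H[0][0] * u + H[0][1] * v + H[0][2]) / w,
            (H[1][0] * u + H[1][1] * v + H[1][2]) / w)

# Identity homography leaves points unchanged; a diagonal one scales them.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
S = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]
print(warp_point(I, 4.0, 2.0))  # (4.0, 2.0)
print(warp_point(S, 4.0, 2.0))  # (8.0, 4.0)
```

A full bird's-eye image is produced by warping every pixel through such a homography; choosing and shifting the segmentation regions between camera views is where the relative-angle logic of the abstract comes in.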

Patent
Jeff Sklaroff1
26 Aug 2016
TL;DR: In this article, a method for producing digital ink in a collaboration session between a first computing device and a second computing device that presents a digital canvas is presented; it includes capturing a raster image of content using a camera operably coupled to the first computing device and deriving first image vectors and second image vectors based on first and second portions, respectively, of the image.
Abstract: A method for producing digital ink in a collaboration session between a first computing device and a second computing device that presents a digital canvas. In some embodiments, the method includes capturing a raster image of content using a camera operably coupled to the first computing device, deriving first image vectors and second image vectors based on first and second portions, respectively, of the raster image, sending the first image vectors to the second computing device for displaying a first digital ink object based on the first image vectors, and sending the second image vectors to the second computing device for displaying a second digital ink object based on the second image vectors after the displaying of the first digital ink object.

Patent
24 Aug 2016
TL;DR: In this article, a method for real-time splicing of video frames is presented, which uses a phase correlation method to compute the overlapped area of video frame images to be spliced.
Abstract: The invention discloses a method for splicing videos in real time and belongs to technical field of video image processing. The method computes the overlapped area of video frame images to be spliced by using a phase correlation method, improves a SURF (Speeded UP Robust Features) algorithm, simplifies the generation process of a characteristic point descriptor in the SURF algorithm, reduces the dimensionality of the descriptor, extracts the characteristic points of the overlapped area of the video frames by using the improved SURF algorithm. The invention provides an image registration method based on characteristic block matching to match the characteristic points, decrease calculated amount, increase computational efficiency, and quickly solve an image conversion model by using the characteristic block matching. Finally the invention provides a projection matrix updating method based on a correlation coefficient. The projection matrix updating method is used for updating a projection matrix, prevents erroneous splicing results, and fusing the video frame images so as to splice the videos in real time.
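The overlap-estimation step can be illustrated in one dimension: phase correlation recovers the shift between two signals as the peak of the inverse transform of the normalized cross-power spectrum. This is a from-scratch sketch (an O(N²) DFT for clarity), not the patent's 2-D implementation:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def phase_correlate(a, b):
    """Return s such that b equals a circularly shifted right by s samples,
    found as the peak of the inverse transform of the normalized
    cross-power spectrum conj(A) * B / |conj(A) * B|."""
    A, B = dft(a), dft(b)
    R = []
    for ak, bk in zip(A, B):
        p = ak.conjugate() * bk
        R.append(p / (abs(p) or 1.0))  # guard against zero spectrum bins
    r = idft(R)
    return max(range(len(r)), key=lambda n: r[n].real)

a = [0, 0, 1, 3, 1, 0, 0, 0]
b = a[-3:] + a[:-3]           # same signal, circularly shifted right by 3
print(phase_correlate(a, b))  # 3
```

In two dimensions the same computation on a pair of frames yields their (dx, dy) translation, which localizes the overlapping area and so restricts where SURF features need to be extracted and matched.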

Patent
20 Oct 2016
TL;DR: In this paper, the problem of providing a radiological image conversion screen whereby sensitivity and sharpness, which are in a tradeoff relationship, are obtained at the same time, is addressed.
Abstract: The present invention addresses the problems of providing a radiological image conversion screen whereby sensitivity and sharpness, which are in a trade-off relationship, are obtained at the same time, and of providing a high-performance flat-panel detector which includes the radiological image conversion screen. The present invention resides in a radiological image conversion screen including a phosphor layer and an organic multilayer reflective film, the radiological image conversion screen characterized by including, on a reverse side of the organic multilayer reflective film from the phosphor layer, a layer having a refractive index less than that of the organic multilayer reflective film.


Patent
16 Nov 2016
TL;DR: In this article, a holographic microwave imaging system consisting of a signal generating unit, a signal transmitting unit, a signal receiving unit, a signal control unit, a signal and image conversion unit, and an image display unit is presented, together with its imaging method.
Abstract: The invention relates to a holographic microwave imaging system and an imaging method thereof. The imaging system comprises a signal generating unit, a signal transmitting unit, a signal receiving unit, a signal control unit, a signal and image conversion unit and an image display unit. The signal generating unit transmits a microwave signal to the object to be detected through the signal transmitting unit, so that a scattered electric field is formed around the object. The signal receiving unit measures the scattered electric field as well as changes of information in the object and in the surrounding electric field, and transmits the measurement results to the signal control unit. The signal control unit acquires the scattered-electric-field visibility distribution of the object and transmits it to the signal and image conversion unit, which constructs a four-dimensional image of the object and transmits it to the image display unit, which displays the image. The imaging method comprises the steps of transmitting a single-frequency-domain microwave signal to the object to be detected; receiving the scattered field; acquiring amplitude and phase delay information; and reconstructing the four-dimensional image.

Patent
30 Mar 2016
TL;DR: In this article, connection equipment and a method for portable Wi-Fi wireless devices are presented, whereby a portable Wi-Fi wireless device can simply and easily access a wireless network and an internet-surfing approach is provided for mobile terminal devices.
Abstract: The invention provides connection equipment and a method for portable Wi-Fi wireless devices, whereby a portable Wi-Fi wireless device can simply and easily access a wireless network and an internet-surfing approach is provided for mobile terminal devices. The connection equipment for portable Wi-Fi wireless devices is characterized by comprising a camera unit for shooting and acquiring a two-dimensional code image; a two-dimensional code image conversion unit for extracting the two-dimensional code image and converting it into information in character format; and a Wi-Fi connection unit for connecting to the wireless network using the character-format information, which is set as the network parameters.
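The conversion from decoded two-dimensional code text to network parameters can be sketched with the widely used "WIFI:" payload convention; the field names (T, S, P) come from that convention, not from this patent:

```python
def parse_wifi_qr(payload):
    """Parse a 'WIFI:' QR payload (common ZXing-style convention) into the
    network parameters a Wi-Fi connection unit would use: T = security
    type, S = SSID, P = password. Escaping of ';' in values is ignored."""
    if not payload.startswith("WIFI:"):
        raise ValueError("not a Wi-Fi configuration payload")
    fields = {}
    for part in payload[5:].rstrip(";").split(";"):
        if ":" in part:
            key, _, value = part.partition(":")
            fields[key] = value
    return {"ssid": fields.get("S"),
            "security": fields.get("T"),
            "password": fields.get("P")}

print(parse_wifi_qr("WIFI:T:WPA;S:HomeAP;P:secret123;;"))
# {'ssid': 'HomeAP', 'security': 'WPA', 'password': 'secret123'}
```

This sketch skips the convention's backslash-escaping of ';' and ':' inside values; the resulting dictionary is the kind of character-format information the connection unit would hand to the platform's network API.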