
Showing papers on "Line segment published in 2016"


Journal ArticleDOI
Jianwei Niu1, Jie Lu1, Mingliang Xu2, Pei Lv2, Xiaoke Zhao1 
TL;DR: This work proposes a novel lane detection method that models each lane as a pair of boundaries, and demonstrates the outstanding performance of the method on a challenging dataset of road images compared with state-of-the-art lane-detection methods.

152 citations


Proceedings ArticleDOI
16 May 2016
TL;DR: A probabilistic approach to stereo visual odometry based on the combination of point and line segment features that works robustly in a wide variety of scenarios and provides interesting advantages, including a straightforward integration into any probabilistic framework commonly employed in mobile robotics.
Abstract: Most approaches to stereo visual odometry reconstruct the motion based on the tracking of point features along a sequence of images. However, in low-textured scenes it is often difficult to find a large set of point features, or they may be poorly distributed over the image, so that the behavior of these algorithms deteriorates. This paper proposes a probabilistic approach to stereo visual odometry based on the combination of point and line segment features that works robustly in a wide variety of scenarios. The camera motion is recovered through non-linear minimization of the projection errors of both point and line segment features. In order to effectively combine the two types of features, their associated errors are weighted according to their covariance matrices, computed from the propagation of Gaussian error distributions in the sensor measurements. The method is, of course, computationally more expensive than using only one type of feature, but it can still run in real time on a standard computer and provides interesting advantages, including a straightforward integration into any probabilistic framework commonly employed in mobile robotics.
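The covariance-based weighting described above can be sketched in one dimension: each reprojection residual is divided by its measurement variance, so noisier features contribute less to the cost being minimized. This is a minimal illustration with hypothetical residual and variance values, not the authors' implementation (which uses full covariance matrices):

```python
def weighted_cost(residuals, variances):
    """Sum of squared residuals, each weighted by the inverse of its
    scalar variance -- a 1-D stand-in for the full Mahalanobis form."""
    return sum(r * r / v for r, v in zip(residuals, variances))

# Hypothetical reprojection errors: the noisier line feature is down-weighted.
point_residuals = [0.5, 0.2]
point_variances = [0.01, 0.01]
line_residuals = [0.5]
line_variances = [0.25]   # noisier measurement -> smaller weight

cost = weighted_cost(point_residuals + line_residuals,
                     point_variances + line_variances)
# cost == 30.0: the point terms (25 + 4) dominate the line term (1)
# even though the raw errors are of comparable size.
```

In the full method this scalar weighting becomes multiplication by the inverse covariance matrix inside the non-linear least-squares solver.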

96 citations


Proceedings ArticleDOI
Hao Yang1, Hui Zhang1
27 Jun 2016
TL;DR: An algorithm that can automatically infer a 3D shape from a collection of partially oriented superpixel facets and line segments, and is efficient: the inference time for each panorama is less than 1 minute.
Abstract: We propose a method to recover the shape of a 3D room from a full-view indoor panorama. Our algorithm can automatically infer a 3D shape from a collection of partially oriented superpixel facets and line segments. The core part of the algorithm is a constraint graph, which includes lines and superpixels as vertices, and encodes their geometric relations as edges. A novel approach is proposed to perform 3D reconstruction based on the constraint graph by solving all the geometric constraints as constrained linear least-squares. The selected constraints used for reconstruction are identified using an occlusion detection method with a Markov random field. Experiments show that our method can recover room shapes that cannot be addressed by previous approaches. Our method is also efficient: the inference time for each panorama is less than 1 minute.

91 citations


Proceedings ArticleDOI
16 May 2016
TL;DR: Evaluation using the KITTI dataset shows that the method outperforms the publicly available and commonly used state-of-the-art method GICP for point cloud registration in both accuracy and speed, especially in cases where the scene lacks significant landmarks or consists of typical urban elements.
Abstract: We present a novel way of odometry estimation from Velodyne LiDAR point cloud scans. The aim of our work is to overcome the most painful issues of Velodyne data - the sparsity and the quantity of data points - in an efficient way, enabling more precise registration. Alignment of the point clouds which yields the final odometry is based on random sampling of the clouds using Collar Line Segments (CLS). The closest line segment pairs are identified in two sets of line segments obtained from two consecutive Velodyne scans. From each pair of correspondences, a transformation aligning the matched line segments into a 3D plane is estimated. In this way, significant planes (ground, walls, …) are preserved among the aligned point clouds. Evaluation using the KITTI dataset shows that our method outperforms the publicly available and commonly used state-of-the-art method GICP for point cloud registration in both accuracy and speed, especially in cases where the scene lacks significant landmarks or consists of typical urban elements. For such environments, the registration error of our method is reduced by 75% compared to the original GICP error.
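The closest-segment pairing step can be sketched roughly as a nearest-neighbour search between two sets of segments. The version below pairs segments by midpoint distance, which is a deliberate simplification of the CLS correspondence step (the paper's actual matching operates on sampled collar line segments); all coordinates are hypothetical:

```python
def midpoint(seg):
    """Midpoint of a 2-D segment given as ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = seg
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def closest_pairs(segs_a, segs_b):
    """Greedy nearest-neighbour pairing of segments between two scans,
    using squared midpoint distance as a crude proximity measure."""
    pairs = []
    for i, a in enumerate(segs_a):
        ax, ay = midpoint(a)
        j = min(range(len(segs_b)),
                key=lambda k: (midpoint(segs_b[k])[0] - ax) ** 2 +
                              (midpoint(segs_b[k])[1] - ay) ** 2)
        pairs.append((i, j))
    return pairs

# Two toy "scans" whose segments are slightly shifted copies of each other:
scan_a = [((0, 0), (1, 0)), ((5, 5), (6, 5))]
scan_b = [((5, 5.1), (6, 5.1)), ((0, 0.1), (1, 0.1))]
# closest_pairs(scan_a, scan_b) -> [(0, 1), (1, 0)]
```

From each such correspondence the full method then estimates a transformation aligning the matched segments into a common plane.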

81 citations


Journal ArticleDOI
Kai Li1, Jian Yao1, Xiaohu Lu1, Li Li1, Zhichao Zhang 
TL;DR: Experiments demonstrate the superiorities of the proposed method to the state-of-the-art methods for its robustness in more difficult situations, the larger amounts of correct matches, and the higher accuracy in most cases.

67 citations


01 Jan 2016
TL;DR: In this paper, an advanced method for statistical analysis of correlations of marks of spatial marked point processes is presented, which is for example applicable in the study of orientations of objects in spatial patterns.
Abstract: This paper presents an advanced method for statistical analysis of correlations of marks of spatial marked point processes. This extension of mark correlation is for example applicable in the study of orientations of objects in spatial patterns. The approach is applied to line segment processes which can be described by marked point processes. As an example a pattern of biological origin is studied. In connection with this, models of line segment processes with non-intersecting segments are suggested.

39 citations


Journal ArticleDOI
TL;DR: Computationally efficient methods for estimating chord length distributions and lineal path functions for 2D and 3D microstructure images defined on any number of arbitrary chord orientations are described.
Abstract: Chord length distributions (CLDs) and lineal path functions (LPFs) have been successfully utilized in prior literature as measures of the size and shape distributions of the important microscale constituents in the material system. Typically, these functions are parameterized only by line lengths, and thus calculated and derived independently of the angular orientation of the chord or line segment. We describe in this paper computationally efficient methods for estimating chord length distributions and lineal path functions for 2D (two-dimensional) and 3D microstructure images defined on any number of arbitrary chord orientations. These so-called fully angularly resolved distributions can be computed for over 1000 orientations on large microstructure images (500³ voxels) in minutes on modest hardware. We present these methods as new tools for characterizing microstructures in a statistically meaningful way.
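For a single axis-aligned orientation, an angularly resolved chord length distribution reduces to measuring runs of the solid phase along each row of a binary image. This is a minimal sketch of that one-orientation case, not the paper's optimized multi-orientation method:

```python
def chord_lengths_horizontal(image):
    """Lengths of solid-phase chords (runs of 1s) along the horizontal
    direction of a binary 2-D image -- one orientation of an angularly
    resolved chord length distribution."""
    lengths = []
    for row in image:
        run = 0
        for v in row:
            if v:
                run += 1
            elif run:
                lengths.append(run)
                run = 0
        if run:                 # chord touching the right border
            lengths.append(run)
    return lengths

img = [[1, 1, 0, 1],
       [0, 1, 1, 1]]
lengths = chord_lengths_horizontal(img)  # -> [2, 1, 3]
```

Histogramming these lengths per orientation yields the angularly resolved CLD the abstract describes.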

37 citations


Proceedings ArticleDOI
13 Jun 2016
TL;DR: The proposed Event-based Line Segment Detector (ELiSeD) is a step towards a general solution of the so-called event correspondence problem, parameterizing the event stream as a set of line segments.
Abstract: Event-based temporal contrast vision sensors such as the Dynamic Vision Sensor (DVS) have advantages such as high dynamic range, low latency, and low power consumption. Instead of frames, these sensors produce a stream of events that encode discrete amounts of temporal contrast. Surfaces and objects with sufficient spatial contrast trigger events if they are moving relative to the sensor, which thus performs inherent edge detection. These sensors are well-suited for motion capture, but so far suitable event-based, low-level features that allow assigning events to spatial structures have been lacking. A general solution of the so-called event correspondence problem, i.e. inferring which events are caused by the motion of the same spatial feature, would allow applying these sensors in a multitude of tasks such as visual odometry or structure from motion. The proposed Event-based Line Segment Detector (ELiSeD) is a step towards solving this problem by parameterizing the event stream as a set of line segments. The event stream which is used to update these low-level features is continuous in time and has a high temporal resolution; this allows capturing even fast motions without the requirement to solve the conventional frame-to-frame motion correspondence problem. The ELiSeD feature detector and tracker runs in real-time on a laptop computer at image speeds of up to 1300 pix/s and can continuously track rotations of up to 720 deg/s. The algorithm is open-sourced in the jAER project.

31 citations


Journal ArticleDOI
15 Apr 2016-Sensors
TL;DR: The experimental results demonstrate that the proposed saliency-based SSL detection method is significantly superior to other state-of-the-art methods in terms of accuracy rate and real-time performance, and its accuracy and stability are effectively improved by the CKF.
Abstract: Special features in real marine environments such as cloud clutter, sea glint and weather conditions always result in various kinds of interference in optical images, which make it very difficult for unmanned surface vehicles (USVs) to detect the sea-sky line (SSL) accurately. To solve this problem, a saliency-based SSL detection method is proposed. Through the computation of gradient saliency, the line features of the SSL are enhanced effectively, while other interference factors are relatively suppressed, and line support regions are obtained by a region growing method on gradient orientation. The SSL identification is achieved according to region contrast, line segment length and orientation features, and optimal state estimation of SSL detection is implemented by introducing a cubature Kalman filter (CKF). In the end, the proposed method is tested on a benchmark dataset from the “XL” USV in a real marine environment, and the experimental results demonstrate that the proposed method is significantly superior to other state-of-the-art methods in terms of accuracy rate and real-time performance, and its accuracy and stability are effectively improved by the CKF.

31 citations


Journal ArticleDOI
Chunyang Zhao1, Huaici Zhao, Jinfeng Lv1, Shijie Sun1, Bo Li1 
TL;DR: Experimental results indicate that the proposed Multimodality Robust Line Segment Descriptor (MRLSD) matching method achieves higher precision and repeatability than several state-of-the-art local feature-based multimodality matching methods, and also demonstrates its robustness to multimodal images.

28 citations


Journal ArticleDOI
TL;DR: In this paper, the authors considered a gradient estimate for a conductivity problem whose inclusions are two neighboring insulators in three dimensions, and established upper and lower bounds on the gradient along the shortest line segment between two insulating unit spheres in 3D.

Journal ArticleDOI
TL;DR: In this paper, a θ-type integral of the Havelock-form Green's function distributed on a horizontal line segment is obtained; as a result, both the order of the infinite multiplier and the amplitude of the oscillating function decrease, and the term with an infinite number vanishes as well.

Journal ArticleDOI
TL;DR: The proposed method reconstructs feature curves from the intersections of developable strip pairs which approximate the regions along both sides of the features.

Journal ArticleDOI
12 Aug 2016-Sensors
TL;DR: This model incorporates prior spatial-temporal knowledge with lane appearance features to jointly identify lane boundaries to address the challenges of structure variation, large noise and complex illumination.
Abstract: Lane boundary detection technology has progressed rapidly over the past few decades. However, many challenges that often lead to lane detection unavailability remain to be solved. In this paper, we propose a spatial-temporal knowledge filtering model to detect lane boundaries in videos. To address the challenges of structure variation, large noise and complex illumination, this model incorporates prior spatial-temporal knowledge with lane appearance features to jointly identify lane boundaries. The model first extracts line segments in video frames. Two novel filters—the Crossing Point Filter (CPF) and the Structure Triangle Filter (STF)—are proposed to filter out the noisy line segments. The two filters introduce spatial structure constraints and temporal location constraints into lane detection, which represent the spatial-temporal knowledge about lanes. A straight line or curve model determined by a state machine is used to fit the line segments to finally output the lane boundaries. We collected a challenging realistic traffic scene dataset. The experimental results on this dataset and other standard datasets demonstrate the strength of our method. The proposed method has been successfully applied to our autonomous experimental vehicle.

Proceedings ArticleDOI
Kai Li1, Jian Yao1, Mengsheng Lu1, Yuan Heng1, Teng Wu1, Yinxuan Li1 
07 Mar 2016
TL;DR: A benchmark which provides the ground truth matches among 30 pairs of line segment sets extracted from 15 representative image pairs using two state-of-the-art line segment detectors is introduced and used to evaluate some of the existing LSM methods.
Abstract: As the vital procedure for exploiting line segments extracted from images for solving computer vision problems, Line Segment Matching (LSM) has received growing attention from researchers in recent years, and a considerable number of methods have been proposed. However, no one has attempted to solve two major problems in this area. The first is how to evaluate different methods in an unbiased way. All proposed methods were evaluated using images and line segment detectors selected by the authors themselves, making the conclusions based on these somewhat biased experiments less convincing. The second problem is that there is no reliable automatic way to assess the correctness of obtained line segment matches, which can often number in the hundreds. Checking them one by one by visual inspection is the only reliable, but very tedious and error-prone, way. In this paper, we aim to solve these two problems. We introduce a benchmark which provides the ground truth matches among 30 pairs of line segment sets extracted from 15 representative image pairs using two state-of-the-art line segment detectors. With the benchmark, we evaluated some of the existing LSM methods.

Patent
24 Aug 2016
TL;DR: In this paper, a lane line detection method for complex road condition scenes is presented. The method enables a smart car to obtain road image information and identify lane lines in real time: after the edge information of a road condition image is obtained, the edge image is scanned and the connecting direction of edge pixel points is calculated to filter out noise edges with abnormal connecting directions.
Abstract: The invention discloses a lane line detection method for complex road condition scenes. The method enables a smart car to obtain road image information and identify lane lines in real time. The method includes scanning the edge image after the edge information of a road condition image is obtained, and calculating the connecting direction of edge pixel points to filter out noise edges with abnormal connecting directions. The characteristic information of the vanishing point of a lane line is used, and the position of the vanishing point is obtained through a voting mechanism. The vanishing point serves both as the constraint condition for filtering interference line segments and as a key parameter for fitting a lane line. The method eliminates the influence of interference factors such as tree shadows, characters and driving vehicles on the pavement, enables lane line detection under various complex environments, and ensures high accuracy and good robustness.

Posted Content
TL;DR: Several classes of intersection graphs of line segments in the plane are considered, and it is shown that not every intersection graph of rays is an intersection graph of downward rays, and not every intersection graph of rays is an outer segment graph.
Abstract: We consider several classes of intersection graphs of line segments in the plane and prove new equality and separation results between those classes. In particular, we show that: (1) intersection graphs of grounded segments and intersection graphs of downward rays form the same graph class, (2) not every intersection graph of rays is an intersection graph of downward rays, and (3) not every intersection graph of rays is an outer segment graph. The first result answers an open problem posed by Cabello and Jejcic. The third result confirms a conjecture by Cabello. We thereby completely elucidate the remaining open questions on the containment relations between these classes of segment graphs. We further characterize the complexity of the recognition problems for the classes of outer segment, grounded segment, and ray intersection graphs. We prove that these recognition problems are complete for the existential theory of the reals. This holds even if a 1-string realization is given as additional input.

Journal ArticleDOI
23 Jun 2016-Sensors
TL;DR: A novel line space voting method for fast vanishing-point detection that can meet the real-time requirements of navigation for autonomous mobile robots and unmanned ground vehicles is presented.
Abstract: Vanishing-point detection is an important component for the visual navigation system of an autonomous mobile robot. In this paper, we present a novel line space voting method for fast vanishing-point detection. First, the line segments are detected from the road image by the line segment detector (LSD) method according to the pixel’s gradient and texture orientation computed by the Sobel operator. Then, the vanishing-point of the road is voted on by considering the points of the lines and their neighborhood spaces with weighting methods. Our algorithm is simple, fast, and easy to implement with high accuracy. It has been experimentally tested on hundreds of structured and unstructured road images. The experimental results indicate that the proposed method is effective and can meet the real-time requirements of navigation for autonomous mobile robots and unmanned ground vehicles.
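Voting for a vanishing point can be sketched as accumulating pairwise line intersections into coarse cells and keeping the densest cell. This is a simplification of the weighted neighborhood-space voting described above, with hypothetical line coefficients:

```python
from collections import Counter

def intersect(l1, l2):
    """Intersection of two lines given as (a, b, c) with a*x + b*y + c = 0;
    returns None for (near-)parallel lines."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    return ((b1 * c2 - b2 * c1) / det, (a2 * c1 - a1 * c2) / det)

def vanishing_point(lines, cell=1.0):
    """Vote pairwise intersections into a coarse grid; return a point
    from the cell with the most votes."""
    votes, pts = Counter(), {}
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            p = intersect(lines[i], lines[j])
            if p is None:
                continue
            key = (round(p[0] / cell), round(p[1] / cell))
            votes[key] += 1
            pts[key] = p
    return pts[votes.most_common(1)[0][0]]

# Three "lane" lines meeting at (2, 3) plus one outlier line x = 10:
lines = [(0, 1, -3), (1, 0, -2), (3, -2, 0), (1, 0, -10)]
vp = vanishing_point(lines)   # -> (2.0, 3.0)
```

The real method additionally weights votes by the points on each line and their neighborhood spaces rather than using raw intersections.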

Journal ArticleDOI
TL;DR: Experiments with images from different types of satellite datasets demonstrate that the proposed algorithm is automatic, fast (4 ms faster than the second fastest method, i.e., the rotation- and scale-invariant shape context) and can achieve a recall of 79.7% on average for remote sensing images with large background variations.
Abstract: Image registration is an essential step in the process of image fusion, environment surveillance and change detection. Finding correct feature matches during the registration process proves to be difficult, especially for remote sensing images with large background variations (e.g., images taken pre and post an earthquake or flood). Traditional registration methods based on local intensity probably cannot maintain steady performances, as differences are significant in the same area of the corresponding images, and ground control points are not always available in many disaster images. In this paper, an automatic image registration method based on the line segments on the main shape contours (e.g., coastal lines, long roads and mountain ridges) is proposed for remote sensing images with large background variations, because the main shape contours can hold relatively more invariant information. First, a line segment detector called EDLines (Edge Drawing Lines), which was proposed by Akinlar et al. in 2011, is used to extract line segments from two corresponding images, and a line validation step is performed to remove meaningless and fragmented line segments. Then, a novel line segment descriptor with a new histogram binning strategy, which is robust to global geometrical distortions, is generated for each line segment based on the geometrical relationships, including both the locations and orientations of the remaining line segments relative to it. As a result of the invariance of the main shape contours, correct line segment matches will have similar descriptors and can be obtained by cross-matching among the descriptors. Finally, a spatial consistency measure is used to remove incorrect matches, and transformation parameters between the reference and sensed images can be figured out.
Experiments with images from different types of satellite datasets, such as Landsat7, QuickBird, WorldView, and so on, demonstrate that the proposed algorithm is automatic, fast (4 ms faster than the second fastest method, i.e., the rotation- and scale-invariant shape context) and can achieve a recall of 79.7%, a precision of 89.1% and a root mean square error (RMSE) of 1.0 pixels on average for remote sensing images with large background variations.
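The idea of describing a line segment by the geometry of the remaining segments relative to it can be reduced to a toy orientation-only histogram. This is a much-simplified sketch, not the paper's descriptor (which also encodes relative locations and a dedicated binning strategy); the example segments are hypothetical:

```python
import math

def seg_angle(seg):
    """Undirected orientation of a segment ((x1, y1), (x2, y2)) in [0, pi)."""
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1) % math.pi

def orientation_descriptor(target, others, bins=4):
    """Histogram of the other segments' orientations relative to the
    target segment -- a toy relative-geometry descriptor."""
    hist = [0] * bins
    t = seg_angle(target)
    for s in others:
        d = (seg_angle(s) - t) % math.pi
        hist[min(int(d / math.pi * bins), bins - 1)] += 1
    return hist

target = ((0, 0), (1, 0))                      # horizontal segment
others = [((2, 2), (3, 2)),                    # parallel      -> bin 0
          ((0, 0), (0, 1)),                    # perpendicular -> bin 2
          ((0, 0), (1, 1))]                    # 45 degrees    -> bin 1
# orientation_descriptor(target, others) -> [1, 1, 1, 0]
```

Because such descriptors depend only on relative geometry, they stay comparable across images with large appearance changes, which is the property the paper exploits.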

Proceedings ArticleDOI
Truc Le1, Ye Duan1
19 Aug 2016
TL;DR: CDBD is created, the first benchmark dataset for circle detection with ground truth circles labeled by humans, which will establish standard quantitative results in future research regarding circle detection.
Abstract: Circle detection from digital images is a necessary operation in many robotics and computer vision tasks to facilitate shape and object recognition. We propose and analyze a novel method, based on line segment detection and circle completeness verification, to detect circles in images. The key idea is to use line segments instead of raw edge pixels to get the circle candidates, followed by a verification step to measure the circle's completeness. Experimental results on several synthesized, hand-sketched and natural images with various complications favor the accuracy, robustness and efficiency of our approach against other well-known techniques. Our method can deal with incomplete, concentric, discontinuous and occluded circles with noise and deformation. Moreover, in this paper, we create CDBD, the first benchmark dataset for circle detection with ground truth circles labeled by humans, which will establish standard quantitative results in future research regarding circle detection.
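A circle completeness check can be sketched as measuring how many angular bins of a candidate circle contain a nearby edge point. This is an assumed formulation for illustration, not the authors' exact measure; the tolerance and bin count are hypothetical parameters:

```python
import math

def completeness(center, radius, edge_points, bins=36, tol=0.5):
    """Fraction of angular bins on the candidate circle supported by at
    least one edge point within distance `tol` of the circle."""
    cx, cy = center
    covered = set()
    for x, y in edge_points:
        d = math.hypot(x - cx, y - cy)
        if abs(d - radius) <= tol:
            a = math.atan2(y - cy, x - cx) % (2 * math.pi)
            covered.add(int(a / (2 * math.pi) * bins + 0.5) % bins)
    return len(covered) / bins

# Edge points covering only half of a unit circle score about 0.5:
half = [(math.cos(2 * math.pi * k / 36), math.sin(2 * math.pi * k / 36))
        for k in range(18)]
score = completeness((0.0, 0.0), 1.0, half)   # -> 0.5
```

Thresholding such a score lets a detector accept occluded or discontinuous circles while rejecting accidental arc-like candidates.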

Journal ArticleDOI
26 Jan 2016
TL;DR: In this paper, the authors presented a method for image-based navigation from an image memory using line segments as landmarks, where the entire navigation process is based on 2D image information without using any 3D information at all.
Abstract: This letter presents a method for image-based navigation from an image memory using line segments as landmarks. The entire navigation process is based on 2-D image information without using any 3-D information at all. The environment is represented by a set of reference images with overlapping landmarks, which are acquired during a prior learning phase. These reference images define the path to follow during the navigation. The switching of reference images is done by exploiting the line segment matching between the current acquired image and nearby reference images. The three-view matching result is used to compute the rotational velocity of the mobile robot during its navigation by visual servoing. Real-time navigation has been validated inside a corridor and inside a room with a Pioneer 3-DX equipped with an on-board camera. The obtained results confirm the viability of our approach, and verify that accurate mapping and localization are not necessary for useful indoor navigation, and that line segments are better features in structured indoor environments.

Proceedings ArticleDOI
01 Dec 2016
TL;DR: The proposed multiscale extension of a well-known line segment detector, LSD is shown to be much less prone to over-segmentation, more robust to low contrast and less sensitive to noise, while keeping the parameterless advantage of LSD and still being fast.
Abstract: We propose a multiscale extension of a well-known line segment detector, LSD. We show that its multiscale nature makes it much less prone to over-segmentation, more robust to low contrast and less sensitive to noise, while keeping the parameterless advantage of LSD and still being fast. Moreover, we show that in scenes with little or no feature points, but where it is however possible to perform structure from motion from matched line segments, the accuracy is significantly improved. This provides an objective and automatic quantitative assessment of our detector that goes much beyond the usual qualitative visual inspection found in the literature.

Journal ArticleDOI
TL;DR: A fast and easy-to-implement algorithm of plane segmentation based on cross-line element growth (CLEG) that produces accurate segmentation at a speed of 6 s per 3 million points.
Abstract: Plane segmentation is an important step in feature extraction and 3D modeling from light detection and ranging (LiDAR) point cloud. The accuracy and speed of plane segmentation are two issues difficult to balance, particularly when dealing with a massive point cloud with millions of points. A fast and easy-to-implement algorithm of plane segmentation based on cross-line element growth (CLEG) is proposed in this study. The point cloud is converted into grid data. The points are segmented into line segments with the Douglas-Peucker algorithm. Each point is then assigned to a cross-line element (CLE) obtained by segmenting the points in the cross-directions. A CLE determines one plane, and this is the rationale of the algorithm. CLE growth and point growth are combined after selecting the seed CLE to obtain the segmented facets. The CLEG algorithm is validated by comparing it with popular methods, such as RANSAC, 3D Hough transformation, principal component analysis (PCA), iterative PCA, and a state-of-the-art global optimization-based algorithm. Experiments indicate that the CLEG algorithm runs much faster than the other algorithms. The method can produce accurate segmentation at a speed of 6 s per 3 million points. The proposed method also exhibits good accuracy.
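The Douglas-Peucker step used above to split ordered points into line segments is a classic recursive simplification: keep the endpoints, find the interior point farthest from the chord, and recurse if it exceeds a tolerance. A compact sketch (a generic implementation, not the CLEG code):

```python
import math

def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    n = math.hypot(dx, dy)
    if n == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / n

def douglas_peucker(points, eps):
    """Recursively simplify an ordered polyline to within tolerance eps."""
    if len(points) < 3:
        return list(points)
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= eps:                       # chord is a good enough fit
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], eps)
    right = douglas_peucker(points[idx:], eps)
    return left[:-1] + right              # drop duplicated split point

# Nearly collinear points collapse to one segment; a corner is kept:
douglas_peucker([(0, 0), (1, 0.01), (2, 0)], 0.1)   # -> [(0, 0), (2, 0)]
douglas_peucker([(0, 0), (1, 1), (2, 0)], 0.1)      # -> all three points
```

In CLEG, each run of grid-ordered points simplified this way becomes a line segment, and crossing segments seed the plane growth.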

Journal ArticleDOI
TL;DR: The straightening algorithm was generalized to a more interesting case where the agent dynamics obeys second-order differential equations or, stated differently, it is the agent’s acceleration (or the force applied to it) that is the control.
Abstract: Consideration was given to a special problem of controlling a formation of mobile agents, that of uniform deployment of several identical agents on a segment of the straight line. For the case of agents obeying the first-order dynamic model, this problem seems to be first formulated in 1997 by I.A. Wagner and A.M. Bruckstein as "row straightening." In the present paper, the straightening algorithm was generalized to a more interesting case where the agent dynamics obeys second-order differential equations or, stated differently, it is the agent's acceleration (or the force applied to it) that is the control.
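The original first-order "row straightening" rule, which the paper generalizes to second-order dynamics, can be simulated directly: each interior agent moves toward the midpoint of its two neighbours while the end agents stay fixed, which converges to uniform spacing. A sketch with hypothetical gain and step count:

```python
def straighten(positions, steps=200, gain=0.5):
    """First-order row straightening on a line: interior agents move
    toward the midpoint of their neighbours; endpoints are fixed."""
    x = list(positions)
    for _ in range(steps):
        new = x[:]
        for i in range(1, len(x) - 1):
            new[i] = x[i] + gain * ((x[i - 1] + x[i + 1]) / 2.0 - x[i])
        x = new
    return x

# Four agents clustered near one end spread out uniformly on [0, 3]:
result = straighten([0.0, 0.2, 0.3, 3.0])
# result approaches [0.0, 1.0, 2.0, 3.0]
```

The second-order version in the paper replaces this position update with a control on each agent's acceleration, which makes convergence analysis substantially harder.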

Proceedings ArticleDOI
27 Jul 2016
TL;DR: This paper proposes an efficient lane markings detection and tracking method, which utilizes line segments as feature information combined with vanishing point constraints, and a Kalman filter is utilized for lane markings tracking.
Abstract: This paper proposes an efficient lane markings detection and tracking method, which utilizes line segments as feature information combined with vanishing point constraints. The method can be highlighted in four items as follows. Firstly, the region of interest (ROI) of the road image is determined, and then the edge information is extracted by the Canny edge detector. Secondly, the edge image is scanned to calculate the orientation of edge-linking pixels, and noise edges with abnormal orientation are filtered out. Then, line segments detected by the Progressive Probabilistic Hough Transform (PPHT) are applied to analyze the structural information of lanes, and the interferential line segments are eliminated under vanishing point constraints. Finally, the K-means clustering algorithm is used to classify and fit the closest two lane markings. Specifically, a Kalman filter is utilized for lane markings tracking. The experimental results demonstrate the good accuracy and robustness of our method in various complex environments.

Proceedings ArticleDOI
07 Mar 2016
TL;DR: The proposed method matches points and line segments jointly on wide-baseline stereo images, is superior to several well-known point and line segment matching methods, and can make 3D line segment reconstruction easier.
Abstract: This paper presents a method that matches points and line segments jointly on wide-baseline stereo images. In both of the two images to be matched, line segments are extracted and those spatially adjacent ones are intersected to generate V-junctions. To match V-junctions from the two images, we extract for each of them an affine and scale invariant local region and describe it with SIFT. The putative V-junction matches obtained from evaluating their description vectors are refined subsequently by the epipolar line constraint and the topological distribution constraint among neighbor V-junctions. Once a pair of V-junctions is matched, the two pairs of line segments forming them are matched accordingly. A part of the line segments from the two images are therefore matched along with the V-junction matches. To get more line segment matches, we further match the line segments left unmatched using the local homographies estimated from their adjacent V-junction matches. Experiments verify the robustness of the proposed method and its superiority to several well-known point and line segment matching methods on wide-baseline stereo images. In addition, we show that the proposed method can make 3D line segment reconstruction easier.

Proceedings ArticleDOI
01 Jan 2016
TL;DR: In this work, a method for understanding a room from a single spherical image, i.e., reconstructing and identifying structural planes forming the ceiling, the floor, and the walls in a room is proposed, using graph cuts to identify segments forming boundaries, from which the planes in 3D are estimated.
Abstract: We propose a method for understanding a room from a single spherical image, i.e., reconstructing and identifying the structural planes forming the ceiling, the floor, and the walls in a room. A spherical image records the light that falls onto a single viewpoint from all directions and does not require correlating geometrical information from multiple images, which facilitates robust and precise reconstruction of the room structure. In our method, we detect line segments from a given image, and classify them into two groups: segments that form the boundaries of the structural planes and those that do not. We formulate this problem as a higher-order energy minimization problem that combines various measures of the likelihood that one, two, or three line segments are part of the boundary. We minimize the energy with graph cuts to identify segments forming boundaries, from which we estimate the structural planes in 3D. Experimental results on synthetic and real images confirm the effectiveness of the proposed method.

Patent
29 Jun 2016
TL;DR: In this paper, a continuous trajectory planning transition method for a robot tail end is presented, which consists of following steps that firstly, a first line segment and a second line segment which need to be subjected to continuous trajectory-planning transition are determined, demonstration points of a connecting line segment are determined and the transition distance between the demonstration points and the first-line segment and the second-line segments are determined; and thirdly, an amplitude coefficient, a phase coefficient and a speed zooming coefficient are calculated in a coordinate axis, and the amplitude coefficients, the phase coefficient
Abstract: The invention discloses a continuous trajectory planning transition method for a robot tail end. The method comprises the following steps. Firstly, a first line segment and a second line segment which need to undergo continuous trajectory planning transition are determined, the demonstration points of a connecting line segment are determined, and the transition distance between the demonstration points and the first line segment and the transition distance between the demonstration points and the second line segment are determined. Secondly, according to the transition distance between the demonstration points and the first line segment, the first transition joint points on the first line segment are determined, and according to the transition distance between the demonstration points and the second line segment, the second transition joint points on the second line segment are determined. Thirdly, an amplitude coefficient, a phase coefficient and a speed zooming coefficient are calculated in a coordinate axis, and these coefficients are substituted into a limited-term sine position planning function to determine a transition curve expression between the first and second transition joint points. The algorithm flow is clear, the computation time is greatly shortened, and the complexity of the robot control system is reduced.
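The geometric part of the first two steps can be sketched as follows: given a via (demonstration) point shared by two segments and a transition distance, the transition joint points lie that distance back along the first segment and forward along the second. The patent's limited-term sine position planning function is not given in the abstract, so the blend below substitutes a generic sinusoidal easing term; all names are hypothetical.

```python
import math

def transition_points(a, p, c, d):
    """Points at distance d from the via point p along segments a->p and p->c.

    These play the role of the first and second transition joint points
    (a minimal sketch; the patent's exact construction is not reproduced).
    """
    def along(src, dst, dist):
        vx, vy = dst[0] - src[0], dst[1] - src[1]
        length = math.hypot(vx, vy)
        return (src[0] + vx * dist / length, src[1] + vy * dist / length)
    q1 = along(p, a, d)   # back along the first segment
    q2 = along(p, c, d)   # forward along the second segment
    return q1, q2

def sine_blend(q1, q2, t):
    """Smooth interpolation between the joint points for t in [0, 1], using a
    sinusoidal easing term (an assumption standing in for the patent's
    limited-term sine position planning function)."""
    s = t - math.sin(2 * math.pi * t) / (2 * math.pi)  # C1-smooth easing
    return (q1[0] + (q2[0] - q1[0]) * s, q1[1] + (q2[1] - q1[1]) * s)

# Corner at p = (4, 0) between segments (0,0)->(4,0) and (4,0)->(4,3),
# with transition distance 1.
q1, q2 = transition_points((0.0, 0.0), (4.0, 0.0), (4.0, 3.0), 1.0)
```

The easing term has zero derivative at both ends, so the transition curve joins each straight segment with continuous velocity.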

Patent
30 Mar 2016
TL;DR: In this paper, a method for acquiring the stroke drawing order of a Chinese ink-wash painting is presented; the drawing order can be obtained automatically from a drawn Chinese ink-wash painting.
Abstract: The present invention discloses an image processing method and an image processing device, belonging to the field of image processing. The image processing method comprises: acquiring an image which includes a plurality of strokes of a Chinese ink-wash painting; acquiring a plurality of backbone strokes; acquiring a plurality of angular points of each backbone stroke according to the backbone strokes; determining the endpoints of the backbone strokes according to the relative position relation among the plurality of angular points; establishing the topological structure of the Chinese ink-wash painting according to the endpoints of each backbone stroke, the topological structure consisting of edge line segments with respect to each backbone stroke; calculating the weight of each edge line segment according to the position of each edge line segment; and determining the drawing order of the backbone strokes in the image according to the position and the weight of each edge line segment in the topological structure. The present invention provides a method for acquiring the stroke drawing order of a Chinese ink-wash painting, through which the drawing order of strokes may be obtained automatically from a drawn Chinese ink-wash painting.

Journal ArticleDOI
TL;DR: To automatically recover the camera parameters linearly, this paper presents a robust homography optimization method based on the edge model by redesigning the classic 3D tracking approach.
Abstract: In order to let general users perform vision tasks more flexibly and easily, this paper proposes a new solution to the problem of camera calibration from correspondences between model lines and their noisy image lines in multiple images. In the proposed method, common planar items at hand with standard size and structure are utilized as the calibration objects. The proposed method consists of a closed-form solution based on homography optimization, followed by a nonlinear refinement based on the maximum likelihood approach. To automatically recover the camera parameters linearly, we present a robust homography optimization method based on the edge model by redesigning the classic 3D tracking approach. In the nonlinear refinement procedure, the uncertainty of each image line segment is encoded in the error model, taking the finite nature of the observations into account. By developing a new error model between the model line and the image line segment, the camera calibration problem is expressed in a probabilistic formulation. Simulated data is used to compare this method with the widely used planar-pattern-based method. Real image sequences are also utilized to demonstrate the effectiveness and flexibility of the proposed method.
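For context on the homography step, the standard point-based Direct Linear Transform (DLT) baseline can be sketched as below. This is the textbook estimator, not the paper's edge-model optimization over line correspondences.

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct Linear Transform estimate of the 3x3 homography H mapping
    src to dst in homogeneous coordinates (dst ~ H @ src).
    Requires >= 4 point correspondences in general position."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the linear system A h = 0.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null vector of A (last right-singular vector) is H up to scale.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Unit square mapped by a known transform: scale by 2, translate by (1, 1).
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(2 * x + 1, 2 * y + 1) for x, y in src]
H = dlt_homography(src, dst)
```

With noisy line observations rather than points, the paper instead optimizes the homography against an edge model and propagates the segment uncertainty into the error covariance, which the plain DLT above does not do.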