
Automated Tire Visual Inspection Based on Low
Rank Matrix Recovery
Guangxu Li
Qingdao University of Science and Technology
Zhouzhou Zheng
Qingdao University of Science and Technology
Yuyi Shao
Qingdao University of Science and Technology
Jinyue Shen
Qingdao University of Science and Technology
Yan Zhang ( zy@qust.edu.cn )
Qingdao University of Science and Technology
Research Article
Keywords: visual inspection, low rank matrix recovery, weight matrix
Posted Date: November 20th, 2020
DOI: https://doi.org/10.21203/rs.3.rs-109309/v1
License: This work is licensed under a Creative Commons Attribution 4.0 International License. 

Guangxu Li¹, Zhouzhou Zheng¹, Yuyi Shao¹, Jinyue Shen¹, Yan Zhang¹,*
College of Mechanical and Electrical Engineering, Qingdao University of Science and Technology, Qingdao 266061, China
Corresponding author: Yan Zhang (e-mail: zy@qust.edu.cn)
Abstract
Visual inspection is a challenging and widely employed process in industry. In this work, an automated tire visual inspection system based on low rank matrix recovery is proposed. A deep network is employed to perform texture segmentation, which benefits low rank decomposition in both quality and computational efficiency. We propose a dual optimization method that improves convergence speed and matrix sparsity by incorporating a weight matrix M into the soft-threshold shrinkage operator. We also investigate how the incremental multiplier affects the decomposition accuracy and the convergence speed of the algorithm. On this basis, image blocks are decomposed into a low-rank matrix and a sparse matrix in which defects are separated. Comparative experiments performed on our dataset validate the theoretical analysis. Based on multi-core distributed computing, the method is promising in terms of false-alarm rate, robustness and running time, and can be extended to other real-time industrial applications.
1. Introduction
Tires, as one of the most important parts of a car, withstanding friction and bearing its weight, are crucial for driving safety. Due to unclean raw materials and inaccurate production equipment in the tire manufacturing process, defects such as foreign matter, air bubbles and texture cracking occur in tires. Accordingly, in the tire manufacturing industry, nondestructive testing via an automated, machine-vision-based visual inspection system is an indispensable stage of the production process. However, many tire manufacturers still rely on naked-eye inspection of the tire radiographic images obtained by the vision system. Such inspection has low detection efficiency and accuracy, and is sensitive to inspector fatigue. Inspection has thus become one of the technical bottlenecks of intelligent manufacturing, and the tire industry has an urgent need for automated, non-destructive visual inspection systems.
To date, a variety of automated visual inspection methods have been investigated and proposed for different industrial applications, such as fabrics, welds, ICs, rail surfaces and strip steel [1-3]. However, few studies have addressed the automated tire defect visual inspection problem, although in recent years it has attracted significant attention in academia and industry. Approaches to automated tire inspection normally fall into the following categories: statistical [4], spectral [5-6], learning-based [7-9], structural [10], and other hybrid methods [11]. In [4], Zhao and Qin proposed a detection method using local inverse difference moment features and achieved good performance, although only foreign object defects were tested. Guo et al. [10] proposed a tire defect detection method based on weighted texture difference: feature similarity was used to capture the texture distortion of each pixel by weighted averaging of the dissimilarity between the pixel and its neighborhood. The method can automatically detect texture and foreign body defects in the tread and sidewall with detection accuracies of 85% and 93.3%, respectively, but owing to its complexity its computational cost is high. Li [12] proposed a radial tire defect detection method for radiographic images based on fuzzy edge detection. In [13], a dictionary-representation-based tire defect detection algorithm was proposed, in which the distribution of representation coefficients was used as a discrimination criterion to detect defects.
In previous research, the tire defect detection problem has been studied using wavelet multiscale analysis [5], edge detection [6], total variation image decomposition [11] and deep learning [7-9] on tire radiographic or laser shearography images. The wavelet multiscale analysis method [5] can separate defective edges from normal textures by computing optimal scale and threshold parameters, and reached satisfactory detection accuracy for foreign object defects. In [6], the authors combined the curvelet transform and the Canny edge operator for tire inspection on laser shearography images. To eliminate the influence of anisotropic multi-texture in tire radiographic images, a total variation based method was proposed in [11] to decompose radiographic images into texture and cartoon components, such that foreign bodies and bubble defects can be detected easily. However, these methods tend to be sensitive to the choice of parameters for different tire types and detection applications.
Deep learning techniques have demonstrated remarkable effectiveness in image classification, text recognition, etc., and have recently been utilized in visual inspection systems [14]. A concise semantic segmentation network (Concise-SSN) was proposed in [7], which realizes segmentation of defects including Belt-Joint-Open and Foreign-Matter. In [8], tire defect classification and end-to-end defect detection were investigated based on convolutional neural networks (CNNs). Deep learning models rely heavily on large numbers of samples to obtain high precision. Ren et al. [15] advanced the Faster R-CNN model towards real-time object detection with Region Proposal Networks, which greatly accelerated deep learning object detection and has been widely adopted. Al Arif et al. [16] proposed a fully automatic U-Net-based framework for segmenting the cervical spine in X-ray images and achieved satisfactory segmentation accuracy. Nevertheless, deep learning methods for object detection or image segmentation inevitably have limitations. Designing a suitable DNN architecture for a given problem remains a challenging task: designing a model with empirically selected hyper-parameters from scratch for a specific application can be a complicated and tedious process that is prohibitively expensive in computational resources and time. It is also worth noting that deep network approaches can lack the flexibility and robustness necessary for industrial applications.
In many tire manufacturing companies, tire visual inspection tasks are still carried out by human operators [5]. There are two main reasons for this. Firstly, unlike inspection problems for fabrics, paper, rail surfaces, etc., in which the background textures are usually uniform, tire defect detection is particularly challenging due to the large number of defect categories and the complex anisotropic textures. Tire defects such as small cracks or bubbles hidden in the tread can hardly be detected using existing techniques. Compared with the other visual inspection applications listed above, fast, simple, robust and automated tire visual inspection remains a challenging research topic.
Low rank matrix recovery (LRMR) has proved to have good application prospects in image decomposition and target detection. Yang et al. [17] proposed a block-based RPCA method for robust moving object detection, in which the inexact augmented Lagrange multiplier (IALM) method was used to obtain satisfactory foreground detection. Cao et al. [18] used an optimal threshold method and low rank representation to achieve automatic segmentation of white blood cells. Li et al. [19] proposed a fabric defect detection method based on an improved low-rank representation (LRR) algorithm; the method is simple, accurate and has less restrictive conditions.
Tire radiographic images consist of a basic repeating pattern. The structural information of the image matrix is strongly correlated, so it can be mapped into a lower-dimensional linear subspace expressed by a few linearly independent vectors. In addition, tire defects, foreign matter for example, are mostly small objects mis-incorporated into the rubber. Such objects occupy a small portion of the image matrix and appear as black bars in tire radiographic images; they are weakly correlated, deviate from the low-rank subspace, and are decomposed into the sparse matrix as noise. Texture defects mainly include texture breakage, abnormal texture spacing and texture bending, which differ significantly from normal tire texture in direction, spacing and shape. As a result, tire radiographic images are of low rank and can therefore be decomposed by LRMR into a low-rank background matrix and a sparse matrix, where the sparse matrix contains the weakly correlated components of the original matrix such as noise and possible defects. Compared with hand-crafted approaches that rely on a selection of pre-defined features and parameters, LRMR-based methods can perform robust decomposition and detection.
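The low-rank argument above can be seen numerically: a defect-free periodic texture has very low matrix rank, and a small defect immediately adds weakly correlated rows that raise it. A minimal sketch (the texture pattern and defect values are toy assumptions, not tire data):

```python
import numpy as np

# A defect-free periodic texture: every row repeats the same 1-D pattern,
# so all rows lie in a one-dimensional subspace.
rng = np.random.default_rng(0)
pattern = rng.random(8)                 # one texture period
texture = np.tile(pattern, (64, 8))     # 64x64 block, rank 1

# A small "foreign body" defect breaks the periodicity in a few rows only.
defective = texture.copy()
defective[30:33, 40:43] += 2.0          # bright blob

print(np.linalg.matrix_rank(texture))    # 1
print(np.linalg.matrix_rank(defective))  # 2: defect rows leave the subspace
```

The defect raises the rank only slightly, which is exactly why it separates into the sparse component of a low-rank decomposition.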
When performing low rank decomposition directly on a tire X-ray image, the algorithm converges very slowly, taking about 260 s per image. Moreover, noise in the sparse matrix makes it difficult to detect defects. To alleviate these difficulties, in this work we use semantic texture segmentation to improve the low rank characteristics of tire images, and we improve the representative inexact augmented Lagrange multiplier (IALM) algorithm [20] for the LRMR problem in both convergence speed and matrix sparsity.
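The per-block, multi-core strategy mentioned in the abstract can be sketched as follows; the block size is an illustrative choice, and `decompose` is a placeholder standing in for the per-block low-rank recovery step:

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def split_blocks(img, bh, bw):
    """Cut an image into non-overlapping bh x bw blocks (image dimensions
    are assumed to be multiples of the block size for simplicity)."""
    return [img[i:i + bh, j:j + bw]
            for i in range(0, img.shape[0], bh)
            for j in range(0, img.shape[1], bw)]

def decompose(block):
    # Placeholder for the per-block low-rank decomposition; a real
    # pipeline would run the (W)IALM routine here and return (A, E).
    return float(block.mean())

if __name__ == "__main__":
    img = np.arange(64, dtype=float).reshape(8, 8)   # stand-in radiograph
    blocks = split_blocks(img, 4, 4)
    # one block per worker process -> blocks decompose in parallel
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(decompose, blocks))
    print(len(results))  # 4
```

Because the blocks are independent, the per-image running time scales down with the number of available cores.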
We make three specific contributions in this paper. Firstly, we advocate an improved soft-threshold operator over the conventional IALM operator for noise suppression, arguing that noise suppression in the sparse matrix can be obtained by applying the approach to the objective as a whole. Secondly, we demonstrate that the dual optimization of convergence speed and matrix sparsity can be solved efficiently and explicitly, outperforming conventional algorithms, by incorporating the weight matrix M into the soft-threshold shrinkage operator. Finally, we present a semantic texture segmentation model based on SegNet [21] for image rank reduction. As a result, we demonstrate improvements in the decomposition quality and detection accuracy of the proposed method, with a running time that can satisfy real-world applications.
This paper is organized as follows. The theoretical foundations of the low rank matrix recovery and the proposed scheme are
presented in Section 2. In Section 3, experiments are described with the automated tire visual inspection system. Experimental
results are presented and discussed in Section 4, including a comparison with state-of-the-art methods. Finally, conclusions are delivered in Section 5.
2. Theory and algorithm
2.1. Matrix low rank decomposition
Given a matrix with low rank characteristics and some severely corrupted elements, LRMR refers to the problem of restoring the original matrix by automatically identifying the corrupted elements. The severely corrupted elements account for only a small part of the original matrix; that is, the noise is sparse but its values can be arbitrary. Low rank matrix recovery assumes that the image background lies in a low-dimensional subspace, approximated by a low rank matrix, while the significant target deviates from the low-rank subspace as noise and is represented by a sparse matrix. Hence, the task of recovering the low-rank and sparse components [22], which is a non-deterministic polynomial (NP-hard) problem, can be accurately accomplished in the probabilistic sense by solving a convex relaxation problem involving the nuclear norm and the ℓ1-norm.

As a generalization of signal sparse representation in compressed sensing, different models have been developed to perform LRMR in the literature, mainly of three types: Robust Principal Component Analysis (RPCA) [23], Matrix Completion (MC) [24] and Low-Rank Representation (LRR) [25]. Among them, RPCA represents the data matrix as the sum of a low rank matrix and a sparse noise matrix, and then recovers the low rank matrix by solving a nuclear-norm optimization problem. The model has been widely studied and applied in numerous applications, such as video surveillance, image alignment, graph clustering, covariance estimation, latent semantic indexing and low rank textures. The matrix low rank
decomposition model can be defined as

    \min_{A,E}\ (\operatorname{rank}(A),\ \lVert E\rVert_0), \quad \text{s.t. } D = A + E \qquad (1)

where D ∈ R^{m×n} denotes a corrupted high-dimensional matrix, A is a low rank matrix representing the image background, and E is a sparse matrix representing the corrupted or noise portion. Equation (1) is a two-objective optimization problem. After introducing a regularization parameter λ, it can be transformed into the optimization problem

    \min_{A,E}\ \operatorname{rank}(A) + \lambda\lVert E\rVert_0, \quad \text{s.t. } D = A + E \qquad (2)

where ‖E‖₀ is the zero-order norm of E, that is, the number of its non-zero elements, and λ is a regularization parameter, usually λ = 1/√max(m, n), in which m and n are the numbers of rows and columns of D. This optimization problem is NP-hard. Since the nuclear norm of a matrix is the convex envelope of the matrix rank, and the ℓ1-norm of a matrix is the convex hull of the zero-order norm, Equation (2) is generally relaxed into the convex optimization problem

    \min_{A,E}\ \lVert A\rVert_* + \lambda\lVert E\rVert_1, \quad \text{s.t. } D = A + E \qquad (3)

where ‖A‖* is the nuclear norm of A, that is, the sum of its singular values, and ‖·‖₁ is the ℓ1-norm. Because convex optimization by principal component pursuit can accurately recover the low rank and sparse components of the matrix, this optimization is referred to as Robust PCA (RPCA). The principal component pursuit (PCP) approach [26] in (3) recovers the low-rank and the sparse matrices.
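The ingredients of the PCP problem (3) are cheap to compute directly. The sketch below evaluates the nuclear norm, the ℓ1-norm and the usual weight λ = 1/√max(m, n) on a synthetic low-rank-plus-sparse matrix; the sizes and magnitudes are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 40, 60, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-3 part
E = np.zeros((m, n))
support = rng.choice(m * n, size=20, replace=False)            # sparse support
E.flat[support] = 5.0 * rng.standard_normal(20)
D = A + E

nuclear_norm = np.linalg.svd(A, compute_uv=False).sum()  # ||A||_* in (3)
l1_norm = np.abs(E).sum()                                # ||E||_1 in (3)
lam = 1.0 / np.sqrt(max(m, n))                           # regularization weight

print(np.linalg.matrix_rank(A), np.count_nonzero(E), round(lam, 4))
```

Note that D itself is generically full rank; it is the hidden components A and E that carry the low-rank and sparse structure the relaxation recovers.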
2.2. Weighted contraction IALM algorithm
The matrix low rank sparse decomposition process involves a large number of singular values which result in a large amount of
data computation and a slow convergence speed. In real world applications, a typical tire radiographic image for example, is a
large digital matrix, such that acceleration strategies are essential to ensure low computational complexity and acceptable accuracy
at the same time.
To solve these problems, researchers have recently proposed a large variety of algorithms [27] based on the PCP model, such as Robust Subspace Learning (RSL), Stable Principal Component Pursuit (SPCP), the Augmented Lagrange Multiplier (ALM) and Inexact Augmented Lagrange Multiplier (IALM) methods, the Accelerated Proximal Gradient (APG) method, the Alternating Direction Method (ADM), Templates for First-Order Conic Solvers (TFOCS), and a Bayesian framework (BRPCA), etc.
The iterative thresholding (IT) algorithm [28] is effective for the ℓ1-norm minimization problem, but its convergence is very slow. In the IT algorithm, the RPCA problem (3) is converted to

    \min_{A,E}\ \tau\lVert A\rVert_* + \tau\lambda\lVert E\rVert_1 + \tfrac{1}{2}\lVert A\rVert_F^2 + \tfrac{1}{2}\lVert E\rVert_F^2, \quad \text{s.t. } D = A + E \qquad (4)

By introducing the Lagrange multiplier Y to eliminate the equality constraint, we get the Lagrangian function of (4) as

    L(A, E, Y) = \tau\lVert A\rVert_* + \tau\lambda\lVert E\rVert_1 + \tfrac{1}{2}\lVert A\rVert_F^2 + \tfrac{1}{2}\lVert E\rVert_F^2 + \langle Y,\ D - A - E\rangle \qquad (5)

The IT algorithm is simple and provably correct; however, it requires a large number of iterations to converge and it is difficult to select a step size for acceleration, which limits its applicability. The introduction of the augmented Lagrange multiplier method [20] solves this problem.
For a generic constrained problem min f(X) s.t. h(X) = 0, the augmented Lagrangian function is

    L(X, Y, \mu) = f(X) + \langle Y,\ h(X)\rangle + \tfrac{\mu}{2}\lVert h(X)\rVert_F^2 \qquad (6)

where μ is a positive scalar. When {μ_k} is an increasing sequence and both f and h are continuously differentiable, the Lagrange multiplier Y_k converges to the optimal solution in a Q-linear manner when {μ_k} is bounded. Moreover, the optimal step size to update Y_k is proven to be the chosen penalty parameter μ_k, making algorithm parameter adjustment easier than for the IT algorithm. The advantage of the ALM method is that μ does not need to approach infinity for the iterates to converge to the optimal solution. For the RPCA problem, the augmented Lagrangian is

    L(A, E, Y, \mu) = \lVert A\rVert_* + \lambda\lVert E\rVert_1 + \langle Y,\ D - A - E\rangle + \tfrac{\mu}{2}\lVert D - A - E\rVert_F^2 \qquad (7)

Minimizing (7) exactly over (A, E) at each iteration is referred to as the exact ALM (EALM) method. In fact, a larger μ does not make the EALM algorithm converge faster: when μ is larger, EALM computes an increased number of SVDs, and the convergence of the sub-problem

    (A_{k+1}, E_{k+1}) = \arg\min_{A,E}\ L(A, E, Y_k, \mu_k)

will be slower.
Unlike EALM, the IALM algorithm does not need to solve the above sub-problem exactly. Instead, as in (8) and (9), updating A and E only once per outer iteration is sufficient for A_k and E_k to meet the optimal convergence condition of the RPCA problem [20]:

    A_{k+1} = \arg\min_{A}\ L(A, E_k, Y_k, \mu_k) = U\, S_{\mu_k^{-1}}[\Sigma]\, V^{T}, \quad (U, \Sigma, V^{T}) = \operatorname{svd}(D - E_k + \mu_k^{-1} Y_k) \qquad (8)

    E_{k+1} = \arg\min_{E}\ L(A_{k+1}, E, Y_k, \mu_k) = S_{\lambda\mu_k^{-1}}[D - A_{k+1} + \mu_k^{-1} Y_k] \qquad (9)

where S_ε[·] is the soft-threshold contraction operator that modifies each element of the matrix; it was introduced with the IT algorithm and is reused in the IALM algorithm:

    S_\varepsilon[x] = \begin{cases} x - \varepsilon, & x > \varepsilon \\ x + \varepsilon, & x < -\varepsilon \\ 0, & \text{otherwise} \end{cases} \qquad (10)

where ε > 0 and x ∈ R.
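The updates (8)–(10) translate into a short NumPy routine. The sketch below is an illustrative re-implementation of standard IALM, with the usual dual initialization, penalty μ and growth factor ρ of Lin et al. [20] (not values tuned or reported in this paper):

```python
import numpy as np

def shrink(X, tau):
    """Soft-threshold (contraction) operator S_tau of (10), element-wise."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def ialm_rpca(D, lam=None, tol=1e-7, max_iter=500, rho=1.5):
    """Inexact ALM for the RPCA problem (3): returns (A, E) with D ~ A + E."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))             # usual weight in (3)
    norm_two = np.linalg.norm(D, 2)
    Y = D / max(norm_two, np.abs(D).max() / lam)   # dual initialization
    mu = 1.25 / norm_two                           # initial penalty
    E = np.zeros_like(D)
    A = np.zeros_like(D)
    for _ in range(max_iter):
        # (8): singular-value shrinkage -> low-rank update A
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = U @ np.diag(shrink(s, 1.0 / mu)) @ Vt
        # (9): element-wise shrinkage -> sparse update E
        E = shrink(D - A + Y / mu, lam / mu)
        R = D - A - E                              # constraint residual
        Y = Y + mu * R                             # dual ascent, step mu
        mu = rho * mu                              # incremental multiplier
        if np.linalg.norm(R, "fro") / np.linalg.norm(D, "fro") < tol:
            break
    return A, E
```

On a synthetic matrix built as a rank-3 term plus a few percent of large sparse corruptions, this routine separates the two components to small relative error in a few dozen iterations; the incremental multiplier ρ controls the trade-off between iteration count and decomposition accuracy discussed above.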
It is worth noting that the soft-threshold operator used in IALM does not handle the sparse matrix E efficiently: after the algorithm converges, noise remains in E, which affects subsequent defect detection. Sparse matrix noise suppression is therefore an important task in this work. Assuming that possible defective parts are represented together with noise in the sparse matrix E, we mainly explore the problem of sparse matrix noise suppression. An improved IALM algorithm, named weighted contraction IALM (WIALM), is obtained by introducing a new soft-threshold operator

    S_{\varepsilon, M}[x_{ij}] = \begin{cases} x_{ij} - \varepsilon M_{ij}, & x_{ij} > \varepsilon M_{ij} \\ x_{ij} + \varepsilon M_{ij}, & x_{ij} < -\varepsilon M_{ij} \\ 0, & \text{otherwise} \end{cases} \qquad (11)

where M is a weight matrix with the same dimension as A and E. It is used to reduce noise in E by suppressing defect-free textures.
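A sketch of the weighted operator (11), assuming, as described above, that the weight rescales the threshold element-wise; the matrices below are toy values, not taken from the paper:

```python
import numpy as np

def weighted_shrink(X, eps, M):
    """Weighted soft-threshold of (11): per-element threshold eps * M_ij."""
    T = eps * M
    return np.sign(X) * np.maximum(np.abs(X) - T, 0.0)

# Two equal-magnitude responses: one on a heavily weighted texture-gap
# pixel (M large), one on a candidate defect pixel (M zero).
X = np.array([[0.3, 0.3]])
M = np.array([[4.0, 0.0]])
out = weighted_shrink(X, eps=0.1, M=M)
print(out)   # [[0.  0.3]] -- the weighted noise pixel is suppressed
```

With a well-chosen M, texture-gap noise shrinks to zero within a few iterations while defect responses pass through unattenuated.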
The matrix M relies on the singular value decomposition of the image block: the main singular values and their corresponding singular vectors form the principal component subspace of the matrix. For low rank tire images, this principal component subspace contains the strongly correlated background texture information and shields out the noise interference of the sparse subspace. The construction of M is derived from A_k, since the information of the defect-free texture is included in the low-rank matrix of each tire image block: the principal component matrix P of the block is obtained as the element-wise absolute value of the leading-singular-component reconstruction of A_k (12)-(13), and the weights are scaled by the matrix infinity norm

    \lVert P\rVert_\infty = \max_{1\le i\le m}\ \sum_{j=1}^{p} \lvert P_{ij}\rvert \qquad (14)

where p is the horizontal dimension of the principal component matrix P of the tire radiographic image block; that is, ‖P‖∞ is the maximum over rows of the sum of the absolute values of the elements of each row of P. Because the texture of the tire X-ray image is anisotropic and periodic, the texture gaps can be extracted according to a threshold set on the gray-value interval of the principal component matrix P. The texture gaps of the tire image contain a large amount of sparse noise; by increasing the corresponding weights of these elements, their gray values shrink faster in each iteration and they are suppressed in E. As shown in Fig. 1, the weighted part of the weight matrix is displayed in red on the original image, and it is evident that the weight distribution does not include the defect regions. This shows that the improved soft-threshold operator, proposed for the purpose of sparse matrix noise suppression, does not interfere with defect detection.
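The construction of M can be sketched as follows. This is a hypothetical reading of the recipe in (12)–(14): keep the leading singular components of the low-rank estimate A_k as the principal component matrix P, scale by ‖P‖∞, and weight the low-intensity texture gaps. The number of components r and the gap threshold are illustrative choices, not values from the paper:

```python
import numpy as np

def build_weight_matrix(A_k, r=2, gap_thresh=0.1):
    """Hypothetical construction of the weight matrix M from the low-rank
    estimate A_k: principal components -> infinity-norm scale -> weight
    the low-intensity texture gaps."""
    U, s, Vt = np.linalg.svd(A_k, full_matrices=False)
    P = np.abs(U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :])   # |principal part|
    inf_norm = np.abs(P).sum(axis=1).max()              # ||P||_inf, max row sum
    return np.where(P < gap_thresh * inf_norm, 1.0, 0.0)

# Toy periodic block: the first two columns of each period are dark
# texture gaps, the other two are bright texture.
A_k = np.tile(np.array([0.0, 0.0, 1.0, 1.0]), (8, 2))   # 8x8 block
M = build_weight_matrix(A_k)
print(M[0])   # gap pixels get weight 1, textured pixels weight 0
```

A real implementation would compute M once per block from the current low-rank iterate and feed it to the weighted shrinkage of (11).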
The weighted contraction IALM used in the tire defect inspection application is as follows.
Figure 1. (a), (b) Test images; (c), (d) distribution of the weighting matrix M for the corresponding test images.

References

[15] S. Ren, K. He, R. Girshick and J. Sun, "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks."
[21] V. Badrinarayanan, A. Kendall and R. Cipolla, "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation."
[33] O. Ronneberger, P. Fischer and T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation," 2015.