
Showing papers on "Computation published in 2022"


Journal ArticleDOI
TL;DR: In this article, the authors discuss what is possible in this "noisy intermediate-scale" quantum (NISQ) era, including simulation of many-body physics and chemistry, combinatorial optimization, and machine learning.
Abstract: Noisy quantum computers can in principle perform reliable quantum computations, but truly scalable systems require noise levels lower than are presently achieved. Still, moderate-complexity computations can be performed. This review discusses what is possible in this "noisy intermediate-scale" quantum (NISQ) era. Topic areas include the simulation of many-body physics and chemistry, combinatorial optimization, and machine learning. It is evident that the NISQ era has produced new paradigms for programming that will be built upon as quantum computers are further perfected.

316 citations


Journal ArticleDOI
Zhenbin Yang1
TL;DR: In this paper, the Page transition of an evaporating black hole is studied via holographic computations of entanglement entropy, justified using the replica trick with geometries in which a spacetime wormhole connects the different replicas.
Abstract: Recent work has shown how to obtain the Page curve of an evaporating black hole from holographic computations of entanglement entropy. We show how these computations can be justified using the replica trick, from geometries with a spacetime wormhole connecting the different replicas. In a simple model, we study the Page transition in detail by summing replica geometries with different topologies. We compute related quantities in less detail in more complicated models, including JT gravity coupled to conformal matter and the SYK model. Separately, we give a direct gravitational argument for entanglement wedge reconstruction using an explicit formula known as the Petz map; again, a spacetime wormhole plays an important role. We discuss an interpretation of the wormhole geometries as part of some ensemble average implicit in the gravity description.

143 citations


MonographDOI
12 Apr 2022
TL;DR: This book covers topics including Newton methods for nonlinear optimization, iterative methods, applications of the Chebyshev polynomials, and the effects of finite precision arithmetic.
Abstract: 1. Nonlinear Equations. Bisection and Inverse Linear Interpolation. Newton's Method. The Fixed Point Theorem. Quadratic Convergence of Newton's Method. Variants of Newton's Method. Brent's Method. Effects of Finite Precision Arithmetic. Newton's Method for Systems. Broyden's Method. 2. Linear Systems. Gaussian Elimination with Partial Pivoting. The LU Decomposition. The LU Decomposition with Pivoting. The Cholesky Decomposition. Condition Numbers. The QR Decomposition. Householder Triangularization and the QR Decomposition. Gram-Schmidt Orthogonalization and the QR Decomposition. The Singular Value Decomposition. 3. Iterative Methods. Jacobi and Gauss-Seidel Iteration. Sparsity. Iterative Refinement. Preconditioning. Krylov Space Methods. Numerical Eigenproblems. 4. Polynomial Interpolation. Lagrange Interpolating Polynomials. Piecewise Linear Interpolation. Cubic Splines. Computation of the Cubic Spline Coefficients. 5. Numerical Integration. Closed Newton-Cotes Formulas. Open Newton-Cotes Formulas and Undetermined Coefficients. Gaussian Quadrature. Gauss-Chebyshev Quadrature. Radau and Lobatto Quadrature. Adaptivity and Automatic Integration. Romberg Integration. 6. Differential Equations. Numerical Differentiation. Euler's Method. Improved Euler's Method. Analysis of Explicit One-Step Methods. Taylor and Runge-Kutta Methods. Adaptivity and Stiffness. Multi-Step Methods. 7. Nonlinear Optimization. One-Dimensional Searches. The Method of Steepest Descent. Newton Methods for Nonlinear Optimization. Multiple Random Start Methods. Direct Search Methods. The Nelder-Mead Method. Conjugate Direction Methods. 8. Approximation Methods. Linear and Nonlinear Least Squares. The Best Approximation Problem. Best Uniform Approximation. Applications of the Chebyshev Polynomials. Afterword. Bibliography. Answers. Index.
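To make one of the opening topics concrete, the sketch below shows Newton's method for a scalar nonlinear equation; the example function, starting point, and tolerance are illustrative choices, not taken from the book.

```python
# Minimal sketch of Newton's method for f(x) = 0 (illustrative, not from the book).
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)  # Newton step; converges quadratically near a simple root
    return x

# Example: the square root of 2 as the positive root of x^2 - 2 = 0.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # ~1.4142135623730951
```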

117 citations


Journal ArticleDOI
TL;DR: DenseNet, as proposed in this paper, connects each layer to every other layer in a feed-forward fashion: the feature maps of all preceding layers are used as inputs to each layer, and its own feature maps are used as inputs to all subsequent layers.
Abstract: Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with $L$ layers have $L$ connections—one between each layer and its subsequent layer—our network has $\frac{L(L+1)}{2}$ direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, encourage feature reuse and substantially improve parameter efficiency. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring fewer parameters and less computation to achieve high performance.
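As a rough illustration of the connectivity pattern described above, the following PyTorch-style sketch concatenates the feature maps of all preceding layers before each new layer; the channel counts, growth rate, and number of layers are placeholder values, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TinyDenseBlock(nn.Module):
    """Illustrative dense block: every layer sees the concatenation of all earlier feature maps."""
    def __init__(self, in_channels=16, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1, bias=False),
            ))
            channels += growth_rate  # each layer's output is appended to the running feature list

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # input = all preceding feature maps
            features.append(out)
        return torch.cat(features, dim=1)

block = TinyDenseBlock()
print(block(torch.randn(1, 16, 32, 32)).shape)  # torch.Size([1, 64, 32, 32]): 16 + 4*12 channels
```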

85 citations


Proceedings ArticleDOI
01 Jun 2022
TL;DR: CSWin Transformer as discussed by the authors proposes a cross-shaped window self-attention mechanism for computing self-attention in the horizontal and vertical stripes in parallel, with each stripe obtained by splitting the input feature into stripes of equal width.
Abstract: We present CSWin Transformer, an efficient and effective Transformer-based backbone for general-purpose vision tasks. A challenging issue in Transformer design is that global self-attention is very expensive to compute whereas local self-attention often limits the field of interactions of each token. To address this issue, we develop the Cross-Shaped Window self-attention mechanism for computing self-attention in the horizontal and vertical stripes in parallel that form a cross-shaped window, with each stripe obtained by splitting the input feature into stripes of equal width. We provide a mathematical analysis of the effect of the stripe width and vary the stripe width for different layers of the Transformer network which achieves strong modeling capability while limiting the computation cost. We also introduce Locally-enhanced Positional Encoding (LePE), which handles the local positional information better than existing encoding schemes. LePE naturally supports arbitrary input resolutions, and is thus especially effective and friendly for downstream tasks. Incorporated with these designs and a hierarchical structure, CSWin Transformer demonstrates competitive performance on common vision tasks. Specifically, it achieves 85.4% Top-1 accuracy on ImageNet-1K without any extra training data or labels, 53.9 box AP and 46.4 mask AP on the COCO detection task, and 52.2 mIoU on the ADE20K semantic segmentation task, surpassing the previous state-of-the-art Swin Transformer backbone by +1.2, +2.0, +1.4, and +2.0 respectively under a similar FLOPs setting. By further pretraining on the larger dataset ImageNet-21K, we achieve 87.5% Top-1 accuracy on ImageNet-1K and high segmentation performance on ADE20K with 55.7 mIoU. Code and pretrained models are available at https://github.com/microsoft/CSWin-Transformer
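The sketch below only illustrates the stripe-partitioning idea behind the cross-shaped window: an H×W feature map is split into horizontal and vertical stripes of width sw, within which attention would be computed in parallel. It is a geometric illustration with placeholder sizes, not the paper's implementation.

```python
import numpy as np

def split_into_stripes(feat, sw):
    """Split a (H, W, C) feature map into horizontal and vertical stripes of width sw (illustrative)."""
    H, W, C = feat.shape
    horizontal = [feat[i:i + sw, :, :] for i in range(0, H, sw)]  # sw x W stripes
    vertical = [feat[:, j:j + sw, :] for j in range(0, W, sw)]    # H x sw stripes
    return horizontal, vertical

feat = np.random.rand(8, 8, 4)
h_stripes, v_stripes = split_into_stripes(feat, sw=2)
print(len(h_stripes), h_stripes[0].shape)  # 4 (2, 8, 4)
print(len(v_stripes), v_stripes[0].shape)  # 4 (8, 2, 4)
```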

79 citations


Journal ArticleDOI
TL;DR: In this article, a generalized variable-coefficient Boiti-Leon-Pempinelli system describing the water waves in an infinitely narrow channel of constant depth is taken into consideration.

66 citations


Journal ArticleDOI
TL;DR: AMFlow as mentioned in this paper is a Mathematica package for numerically computing dimensionally regularized Feynman integrals via the recently proposed auxiliary mass flow method; the integrals are obtained by constructing and solving differential systems with respect to the auxiliary mass parameter, in an automatic way.

65 citations


Journal ArticleDOI
TL;DR: In this article, a partially-coupled nonlinear parameter optimization algorithm is proposed for multivariate hybrid models; computational efficiency analysis and numerical simulations show that it has low computational complexity and high parameter estimation accuracy.

62 citations


Journal ArticleDOI
TL;DR: In this paper, a partially-coupled nonlinear parameter optimization algorithm is proposed for the multivariate hybrid models, where the original identification model is separated into several regressive sub-identification models according to the characteristics of model outputs.

56 citations


Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a new fault diagnosis method with the least square interactive support matrix machine (LSISMM) and infrared thermal images, which is directly used to analyze the collected thermal images of rotating machinery under time-varying speeds.
Abstract: The existing fault diagnosis methods of rotating machinery constructed with both shallow learning and deep learning models are mostly based on vibration analysis under steady rotating speed. However, the rotating speed frequently changes to meet practical engineering needs. The shallow learning models largely depend on domain experience of feature extraction, and training a deep learning model requires large samples and a long time. In addition, vibration monitoring has the shortcomings of contact measurement, small coverage, and noise interference. To address these problems, this article proposes a new fault diagnosis method with the least square interactive support matrix machine (LSISMM) and infrared thermal images. In this method, a novel matrix-form classifier called LSISMM is constructed under the concept of nonparallel interactive hyperplanes to fully leverage the structure information of infrared thermal images. To improve the computation efficiency, a new least square loss constraint is designed for LSISMM. Besides, we derive an effective solution framework based on the alternating direction method of multipliers (ADMM). The constructed LSISMM is directly used to analyze the collected thermal images of rotating machinery under time-varying speeds. Experiment results demonstrate that the proposed method is superior to state-of-the-art methods in terms of diagnosis accuracy and efficiency, especially with small thermal image samples.

55 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that the dimensionality of the dynamics and subpopulation structure play complementary roles in shaping the collective dynamics of a neural network, leading to task-specific predictions for the structure of neural selectivity and the implication of different neurons in multi-tasking.
Abstract: Neural computations are currently investigated using two separate approaches: sorting neurons into functional subpopulations or examining the low-dimensional dynamics of collective activity. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from networks trained on neuroscience tasks, here we show that the dimensionality of the dynamics and subpopulation structure play fundamentally complementary roles. Although various tasks can be implemented by increasing the dimensionality in networks with fully random population structure, flexible input-output mappings instead require a non-random population structure that can be described in terms of multiple subpopulations. Our analyses revealed that such a subpopulation structure enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics. Our results lead to task-specific predictions for the structure of neural selectivity, for inactivation experiments and for the implication of different neurons in multi-tasking.

Journal ArticleDOI
01 Jan 2022-Energy
TL;DR: This work proposes real-time dynamic optimal energy management (OEM) based on a novel policy-based deep reinforcement learning (DRL) algorithm with continuous state and action spaces, which includes two phases: offline training and online operation.

Journal ArticleDOI
TL;DR: The main idea of SFE is to use symbolic dynamic filtering to remove the noise-related fluctuations while significantly simplifying the circulation calculation, thereby achieving better robustness to background noise and higher computational efficiency.

Journal ArticleDOI
TL;DR: In this article, a task offloading scheme is proposed by exploiting multi-hop vehicle computation resources in VEC based on mobility analysis of vehicles, where vehicles that meet the given requirements in terms of link connectivity and computation capacity are leveraged to carry out the tasks offloaded by the task vehicle.
Abstract: Vehicular Edge Computing (VEC) has gained increasing interest due to its potential to provide low latency and reduce the load in backhaul networks. In order to meet drastically increasing computation demands from emerging ever-growing vehicular applications, e.g., autonomous driving, abundant computation resources of individual vehicles can play a crucial role in task execution in a VEC scenario, which can further contribute to considerably improving user experience. This is, however, an extremely challenging task due to the high mobility of vehicles, which can easily lead to intermittent connectivity, thereby disrupting on-going task processing. In this paper, we propose a task offloading scheme by exploiting multi-hop vehicle computation resources in VEC based on mobility analysis of vehicles. In addition to the vehicles within one hop from the task vehicle that generates computation tasks, certain multi-hop vehicles that meet the given requirements in terms of link connectivity and computation capacity are also leveraged to carry out the tasks offloaded by the task vehicle. An optimization problem is formulated for the task vehicle to minimize the weighted sum of execution time and computation cost of all tasks. A semidefinite relaxation approach with an adaptive adjustment procedure is proposed to solve the formulated optimization problem for obtaining the corresponding offloading decisions. The simulation results show that our proposed offloading scheme can achieve significant improvement in terms of response delay by at least 34% compared with the other algorithms (e.g., local processing and random offloading).
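A toy sketch of the weighted-sum objective mentioned above: for a handful of tasks and candidate nodes it enumerates offloading assignments and picks the one minimizing weighted execution time plus computation cost. The delays, costs, and weight are made-up numbers, and brute-force enumeration stands in for the paper's semidefinite relaxation approach.

```python
from itertools import product

# Hypothetical per-task execution delay (s) and computation cost for each candidate node
# (node 0 = local processing, nodes 1-2 = one-hop/multi-hop helper vehicles).
delay = [[0.9, 0.4, 0.5],   # task 0
         [1.2, 0.6, 0.7],   # task 1
         [0.8, 0.5, 0.3]]   # task 2
cost = [[0.0, 1.0, 1.5],
        [0.0, 1.2, 1.8],
        [0.0, 0.9, 1.1]]
alpha = 0.7  # weight between delay and cost (illustrative)

# Enumerate all assignments of 3 tasks to 3 nodes and keep the cheapest weighted sum.
best = min(product(range(3), repeat=3),
           key=lambda a: sum(alpha * delay[t][a[t]] + (1 - alpha) * cost[t][a[t]]
                             for t in range(3)))
print("offloading decision per task:", best)
```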

Journal ArticleDOI
TL;DR: In this paper, the Polarized Self-Attention (PSA) block was proposed to solve the pixel-wise mapping problem in fine-grained computer vision tasks, such as estimating keypoint heatmaps and segmentation masks.

Journal ArticleDOI
TL;DR: DNNOff as mentioned in this paper rewrites the source code to implement a special program structure supporting on-demand offloading and, at runtime, automatically determines the offloading scheme for a given DNN-based application.
Abstract: Deep neural networks (DNNs) have become increasingly popular in industrial Internet of Things scenarios. Due to high demands on computational capability, it is hard for DNN-based applications to directly run on intelligent end devices with limited resources. Computation offloading technology offers a feasible solution by offloading some computation-intensive tasks to the cloud or edges. Supporting such capability is not easy due to two aspects: adaptability (offloading should dynamically occur among computation nodes) and effectiveness (it must be determined which parts are worth offloading). This article proposes a novel approach, called DNNOff. For a given DNN-based application, DNNOff first rewrites the source code to implement a special program structure supporting on-demand offloading and, at runtime, automatically determines the offloading scheme. We evaluated DNNOff on a real-world intelligent application, with three DNN models. Our results show that, compared with other approaches, DNNOff reduces response time by 12.4–66.6% on average.
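As a rough sketch of the kind of decision DNNOff has to make (not its actual algorithm), the snippet below picks the layer after which to hand a DNN over to an edge server, by trading off device-side compute, edge-side compute, and upload of the intermediate activations; all per-layer numbers and the uplink speed are hypothetical.

```python
# Hypothetical per-layer compute times (ms) on device and edge; upload_kb[k] is the data (KB)
# that must be sent to the edge if the split is after layer k (upload_kb[0] is the raw input).
device_ms = [5.0, 8.0, 12.0, 20.0, 3.0]
edge_ms = [1.0, 1.5, 2.0, 3.0, 0.5]
upload_kb = [600, 400, 300, 150, 40, 4]
uplink_kb_per_ms = 10.0  # assumed network speed

def split_latency(k):
    """Run layers [0, k) on the device, upload the intermediate result, run layers [k, n) on the edge."""
    transfer = upload_kb[k] / uplink_kb_per_ms if k < len(device_ms) else 0.0
    return sum(device_ms[:k]) + transfer + sum(edge_ms[k:])

best_k = min(range(len(device_ms) + 1), key=split_latency)
print("best split after layer", best_k, "->", round(split_latency(best_k), 1), "ms")
```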

Journal ArticleDOI
TL;DR: This paper proposes a secure communication scheme for the NOMA-based UAV-MEC system towards a flying eavesdropper and results show that the proposed scheme is superior to the benchmarks in terms of the system security computation performance.
Abstract: Non-orthogonal multiple access (NOMA) allows multiple users to share link resource for higher spectrum efficiency. It can be applied to unmanned aerial vehicle (UAV) and mobile edge computing (MEC) networks to provide convenient offloading computing service for ground users (GUs) with large-scale access. However, due to the line-of-sight (LoS) of UAV transmission, the information can be easily eavesdropped in NOMA-based UAV-MEC networks. In this paper, we propose a secure communication scheme for the NOMA-based UAV-MEC system towards a flying eavesdropper. In the proposed scheme, the average security computation capacity of the system is maximized while guaranteeing a minimum security computation requirement for each GU. Due to the uncertainty of the eavesdropper’s position, the coupling of multi-variables and the non-convexity of the problem, we first study the worst security situation through mathematical derivation. Then, the problem is solved by utilizing successive convex approximation (SCA) and block coordinate descent (BCD) methods with respect to channel coefficient, transmit power, central processing unit (CPU) computation frequency, local computation and UAV trajectory. Simulation results show that the proposed scheme is superior to the benchmarks in terms of the system security computation performance.

Journal ArticleDOI
TL;DR: This article focuses on the parameter estimation issues for a fractional‐order nonlinear system with autoregressive noise and proposes a two‐stage moving‐data‐window gradient‐based iterative algorithm to reduce the complexity and improve the identification accuracy.
Abstract: This article focuses on the parameter estimation issues for a fractional‐order nonlinear system with autoregressive noise. In the process, the challenge and difficulty are to identify the parameters of the system as well as the order. To reduce the complexity of the structure, we split the system into two subsystems by utilizing the hierarchical identification principle and derive a two‐stage gradient‐based iterative (2S‐GI) algorithm by minimizing two criterion functions. Compared with the calculation amount of the gradient‐based iterative algorithm, the computation of the 2S‐GI algorithm is significantly reduced. Moreover, in order to improve the identification accuracy, we propose a two‐stage moving‐data‐window gradient‐based iterative algorithm. Finally, the simulation examples test the effectiveness of the proposed algorithms.
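The following toy sketch mimics only the hierarchical, block-wise idea behind a two-stage gradient iteration (splitting the parameter vector into two blocks that are updated alternately against a least-squares criterion); it is not the paper's 2S-GI algorithm, and the linear model, data, and step size are invented for illustration.

```python
import numpy as np

# Two parameter blocks of a simple linear model, estimated by alternating gradient updates.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(200, 2)), rng.normal(size=(200, 3))
theta1_true, theta2_true = np.array([1.5, -0.7]), np.array([0.3, 2.0, -1.2])
y = X1 @ theta1_true + X2 @ theta2_true + 0.01 * rng.normal(size=200)

theta1, theta2 = np.zeros(2), np.zeros(3)
mu = 0.5  # step size applied to the sample-averaged gradient
for _ in range(500):
    r = y - X1 @ theta1 - X2 @ theta2
    theta1 += mu * X1.T @ r / len(y)   # stage 1: update the first parameter block
    r = y - X1 @ theta1 - X2 @ theta2
    theta2 += mu * X2.T @ r / len(y)   # stage 2: update the second parameter block
print(np.round(theta1, 3), np.round(theta2, 3))  # should approach the true values
```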

Journal ArticleDOI
TL;DR: A detailed analytical computation of the support characteristic curve (SCC) for circumferential yielding lining, which is a significant aspect of the implementation of the convergence-confinement method (CCM) in tunnel support design, is presented in this article.
Abstract: Circumferential yielding lining is able to tolerate controlled displacements without failure, which has been proven to be an effective solution to large deformation problem in squeezing tunnels. However, up to now, there has not been a well-established design method for it. This paper aims to present a detailed analytical computation of support characteristic curve (SCC) for circumferential yielding lining, which is a significant aspect of the implementation of convergence-confinement method (CCM) in tunnel support design. Circumferential yielding lining consists of segmental shotcrete linings and highly deformable elements, and its superior performance mainly depends on the mechanical characteristic of highly deformable element. The deformation behavior of highly deformable element is firstly investigated. Its whole deforming process can be divided into three stages including elastic, yielding and compaction stages. Especially in the compaction stage of highly deformable element, a nonlinear stress–strain relationship can be observed. For mathematical convenience, the stress–strain curve in this period is processed as several linear sub-curves. Then, the reasons for closure of circumferential yielding lining in different stages are explained, and the corresponding accurate equations required for constructing the SCC are provided. Furthermore, this paper carries out two case studies illustrating the application of all equations needed to construct the SCC for circumferential yielding lining, where the reliability and feasibility of theoretical derivation are also well verified. Finally, this paper discusses the sensitivity of sub-division in element compaction stage and the influence of element length on SCC. The outcome of this paper could be used in the design of proper circumferential yielding lining.
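The sketch below mirrors the paper's simplification of the deformable element's behavior as piecewise-linear segments (elastic, yielding, then progressively stiffening compaction); the breakpoint strains and stresses are invented for illustration and are not the paper's values.

```python
import numpy as np

# Piecewise-linear stress-strain curve for a highly deformable element (illustrative values):
# elastic up to 2% strain, near-plateau yielding up to 30%, then two stiffening compaction segments.
strain_pts = np.array([0.00, 0.02, 0.30, 0.40, 0.50])   # breakpoint strains (-)
stress_pts = np.array([0.0, 4.0, 5.0, 12.0, 35.0])      # corresponding stresses (MPa)

def element_stress(strain):
    """Stress in the deformable element, interpolated on the piecewise-linear curve."""
    return np.interp(strain, strain_pts, stress_pts)

for eps in (0.01, 0.20, 0.45):
    print(f"strain {eps:.2f} -> stress {element_stress(eps):.1f} MPa")
```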

Journal ArticleDOI
TL;DR: A comprehensive survey of sampling methods for efficient training of GCNs can be found in this paper, where the authors categorize sampling methods based on the sampling mechanisms and provide a comprehensive comparison within each category.
Abstract: Graph convolutional networks (GCNs) have received significant attention from various research fields due to the excellent performance in learning graph representations. Although GCN performs well compared with other methods, it still faces challenges. Training a GCN model for large-scale graphs in a conventional way requires high computation and storage costs. Therefore, motivated by an urgent need in terms of efficiency and scalability in training GCN, sampling methods have been proposed and achieved a significant effect. In this paper, we categorize sampling methods based on the sampling mechanisms and provide a comprehensive survey of sampling methods for efficient training of GCN. To highlight the characteristics and differences of sampling methods, we present a detailed comparison within each category and further give an overall comparative analysis for the sampling methods in all categories. Finally, we discuss some challenges and future research directions of the sampling methods.
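As a minimal illustration of node-wise neighbor sampling, one family of methods such surveys cover, the snippet below keeps at most a fixed number of neighbors per node before aggregation; the graph and fanout are toy placeholders.

```python
import random

# Toy graph as an adjacency list; node-wise sampling keeps at most `fanout` neighbors per node,
# which caps the cost of neighborhood aggregation during GCN training (illustrative only).
graph = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2], 4: [0]}

def sample_neighbors(graph, fanout, seed=0):
    rng = random.Random(seed)
    return {node: (nbrs if len(nbrs) <= fanout else rng.sample(nbrs, fanout))
            for node, nbrs in graph.items()}

print(sample_neighbors(graph, fanout=2))
```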

Journal ArticleDOI
01 Jan 2022-Energy
TL;DR: In this paper, a real-time dynamic optimal energy management (OEM) method based on a deep reinforcement learning (DRL) algorithm is proposed to help the EMS make optimal schedule decisions, and the case study demonstrates the effectiveness and the computation efficiency of the proposed method.

Journal ArticleDOI
TL;DR: In this article, a modification of the Sardar sub-equation method is discussed and employed to retrieve solitons and other solutions of the suggested nonlinear model, including bright and dark solutions.

Journal ArticleDOI
TL;DR: In this article, feed-forward neural network based soft sensors were designed to accurately predict distance to empty (DTE) in a Ford Escape EV from actual drive cycle data and rated DTE, using the Levenberg-Marquardt, Bayesian Regularization and Scaled Conjugate Gradient algorithms.
Abstract: Electric vehicle (EV) drivers require reliable distance to empty (DTE) indication to plan their trips. In the current study, feed-forward neural network based soft sensors were designed to accurately predict DTE in a Ford Escape EV. The proposed DTE soft sensors were trained on actual drive cycle data and rated DTE using Levenberg-Marquardt, Bayesian Regularization and Scaled Conjugate Gradient algorithms. Regression models were also developed for comparisons. Primary results show that the Bayesian Regularization trained soft sensor network with eleven hidden layer neurons achieved the highest testing accuracy (99.64%) among the two-layered networks, followed by the Levenberg-Marquardt (two-layered, eleven hidden layer neurons, testing accuracy 99.62%) and Scaled Conjugate Gradient trained networks (two-layered, seven hidden layer neurons, testing accuracy 99.49%). The linear and non-linear regression models attained 96.19% and 97.53% accuracies respectively. Deeper soft sensor networks yielded better prediction accuracies at higher computation times. The five-layered Bayesian Regularization trained network (with ten neurons in each hidden layer) maximized DTE prediction accuracy to 99.89%, but at the cost of 1175% more training time as compared to the best performing two-layered network soft sensor. An optimal choice of prediction accuracy considering reasonable computation timescales can help reduce range anxiety of EV users significantly.
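A minimal sketch of a feed-forward soft sensor of this kind, using scikit-learn's MLPRegressor on synthetic drive-cycle-like features (state of charge, average speed, ambient temperature). The data, features, and network size are invented stand-ins for the paper's Ford Escape EV dataset, and scikit-learn offers lbfgs/sgd/adam solvers rather than the Levenberg-Marquardt or Bayesian Regularization training used in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: DTE roughly proportional to state of charge, reduced by speed and cold weather.
rng = np.random.default_rng(1)
soc = rng.uniform(0.1, 1.0, 2000)      # battery state of charge
speed = rng.uniform(20, 120, 2000)     # average speed, km/h
temp = rng.uniform(-10, 35, 2000)      # ambient temperature, deg C
X = np.column_stack([soc, speed, temp])
dte = 400 * soc - 0.5 * (speed - 60) - 1.5 * np.maximum(0, 10 - temp) + rng.normal(0, 5, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, dte, test_size=0.2, random_state=0)
# Eleven hidden neurons echo the paper's two-layered network size; the solver choice is ours.
model = MLPRegressor(hidden_layer_sizes=(11,), solver="lbfgs", max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("test R^2:", round(model.score(X_te, y_te), 3))
```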

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a symbolic fuzzy entropy (SFE) based on symbolic dynamic filtering and fuzzy entropy to eliminate noise and improve calculation efficiency, which achieved the best performance in extracting weak fault characteristics compared with three existing methods (MSE, MFE, and MPE).

Journal ArticleDOI
TL;DR: In this article, a power converter with a virtual MPC controller is first designed and operated under a circuit simulation or power hardware-in-the-loop simulation environment, and an artificial neural network (ANN) is then trained offline with the input and output data of the virtual MPC controller.
Abstract: There has been an increasing interest in using model predictive control (MPC) for power electronic applications. However, the exponential increase in computational complexity and demand for computing resources hinders the practical adoption of this highly promising control technique. In this article, a new MPC approach using an artificial neural network (termed ANN-MPC) is proposed to overcome these barriers. A power converter with a virtual MPC controller is first designed and operated under a circuit simulation or power hardware-in-the-loop simulation environment. An artificial neural network (ANN) is then trained offline with the input and output data of the virtual MPC controller. Next, an actual FPGA-based MPC controller is designed using the trained ANN instead of relying on heavy-duty mathematical computation to control the actual operation of the power converter in real time. The ANN-MPC approach can significantly reduce the computing need and allow the use of more accurate high-order system models due to the simple mathematical expression of ANN. Furthermore, the ANN-MPC approach can retain the robustness for system parameter uncertainties by flexibly setting the input elements. The basic concept, ANN structure, offline training method, and online operation of ANN-MPC are described in detail. The computing resource requirements of the ANN-MPC and conventional MPC are analyzed and compared. The ANN-MPC concept is validated by both simulation and experimental results on two kW-class flying capacitor multilevel converters. It is demonstrated that the FPGA-based ANN-MPC controller can significantly reduce the FPGA resource requirement (e.g., 2.11 times fewer slice LUTs and 2.06 times fewer DSPs) while offering the same control performance as the conventional MPC.
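A highly simplified sketch of the offline step described above: a small neural network is fit to input-output pairs logged from a controller so that it can replace the online optimization. The "controller" here is a fake placeholder function, and the state variables, data, and network size are invented; this is not the article's converter model or FPGA implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder "MPC controller": maps a measured state (e.g., voltage error, current error) to a
# control action. In the article this role is played by the virtual MPC run in simulation/PHIL.
def fake_mpc(state):
    return np.clip(0.8 * state[:, 0] - 0.3 * state[:, 1], -1.0, 1.0)

rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(5000, 2))   # logged controller inputs
actions = fake_mpc(states)                    # logged controller outputs

# Offline training of the ANN surrogate that later replaces the heavy MPC computation online.
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
ann.fit(states, actions)
print("surrogate action for state [0.5, -0.2]:", ann.predict([[0.5, -0.2]]))
```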

Journal ArticleDOI
TL;DR: In this article, the authors explain the essence of various computer-generated hologram algorithms, review algorithms for color dynamic holographic three-dimensional display, and provide some insights for future research.
Abstract: Holographic three-dimensional display is an important display technique because it can provide all depth information of a real or virtual scene without any special eyewear. In recent years, with the development of computer and optoelectronic technology, computer-generated holograms have attracted extensive attention and developed as the most promising method to realize holographic display. However, some bottlenecks still restrict the development of computer-generated holograms, such as heavy computation burden, low image quality, and the complicated system of color holographic display. To overcome these problems, numerous algorithms have been investigated with the aim of color dynamic holographic three-dimensional display. In this review, we will explain the essence of various computer-generated hologram algorithms and provide some insights for future research.
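To give a flavor of the computation burden discussed above, the sketch below implements the angular spectrum method, a basic numerical wave-propagation step that underlies many computer-generated hologram algorithms; the wavelength, pixel pitch, distance, and aperture are arbitrary example values, not tied to any system in the review.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a square complex field by distance z using the angular spectrum method (illustrative)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)                 # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # propagating components only
    H = np.exp(1j * kz * z) * (arg > 0)             # transfer function, evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: propagate a small square aperture by 5 cm at 532 nm with 8 um pixels.
field = np.zeros((256, 256), dtype=complex)
field[120:136, 120:136] = 1.0
out = angular_spectrum(field, wavelength=532e-9, pitch=8e-6, z=0.05)
print(np.abs(out).max())
```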

Journal ArticleDOI
TL;DR: To improve the efficiency of deep learning research, this review focuses on three aspects: quantized/binarized models, optimized architectures, and resource-constrained systems.
Abstract: Over the past decade, deep-learning-based representations have demonstrated remarkable performance in academia and industry. The learning capability of convolutional neural networks (CNNs) originates from a combination of various feature extraction layers that fully utilize a large amount of data. However, they often require substantial computation and memory resources while replacing traditional hand-engineered features in existing systems. In this review, to improve the efficiency of deep learning research, we focus on three aspects: quantized/binarized models, optimized architectures, and resource-constrained systems. Recent advances in light-weight deep learning models and network architecture search (NAS) algorithms are reviewed, starting with simplified layers and efficient convolution and including new architectural design and optimization. In addition, several practical applications of efficient CNNs have been investigated using various types of hardware architectures and platforms.
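As a minimal illustration of the quantized-model theme mentioned in the review, the snippet below applies symmetric 8-bit quantization to a weight tensor and measures the reconstruction error; it is a generic sketch, not tied to any specific method surveyed.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor 8-bit quantization: w is approximated by scale * q with q in [-127, 127]."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.05, size=(256, 128)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = q.astype(np.float32) * scale
print("max abs reconstruction error:", float(np.max(np.abs(w - w_hat))))  # bounded by ~scale/2
```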

Journal ArticleDOI
TL;DR: In this article, a review of state-of-the-art deep-learning-empowered computational spectral imaging methods is presented; the methods are further divided into amplitude-coded, phase-coded, and wavelength-coded methods, based on the different light properties used for encoding.
Abstract: The goal of spectral imaging is to capture the spectral signature of a target. Traditional scanning methods for spectral imaging suffer from large system volume and low image acquisition speed for large scenes. In contrast, computational spectral imaging methods have resorted to computation power for reduced system volume, but still endure long computation time for iterative spectral reconstructions. Recently, deep learning techniques have been introduced into computational spectral imaging, bringing fast reconstruction speed, great reconstruction quality, and the potential to drastically reduce the system volume. In this article, we review state-of-the-art deep-learning-empowered computational spectral imaging methods. They are further divided into amplitude-coded, phase-coded, and wavelength-coded methods, based on different light properties used for encoding. To boost future research, we have also organized publicly available spectral datasets.


Journal ArticleDOI
TL;DR: Gao et al. as mentioned in this paper investigated an extended coupled (2+1)-dimensional Burgers system in oceanography, acoustics and hydrodynamics, and constructed two sets of similarity reductions with symbolic computation.