
Showing papers on "Binary tree published in 2020"


Proceedings ArticleDOI
14 Jun 2020
TL;DR: In this paper, an attention convolutional binary neural tree architecture is presented for weakly supervised fine-grained visual categorization (FGVC), an important but challenging task due to high intra-class variances and low inter-class variances caused by deformation, occlusion, illumination, etc.
Abstract: Fine-grained visual categorization (FGVC) is an important but challenging task due to high intra-class variances and low inter-class variances caused by deformation, occlusion, illumination, etc. An attention convolutional binary neural tree architecture is presented to address those problems for weakly supervised FGVC. Specifically, we incorporate convolutional operations along edges of the tree structure, and use the routing functions in each node to determine the root-to-leaf computational paths within the tree. The final decision is computed as the summation of the predictions from leaf nodes. The deep convolutional operations learn to capture the representations of objects, and the tree structure characterizes the coarse-to-fine hierarchical feature learning process. In addition, we use the attention transformer module to enforce the network to capture discriminative features. The negative log-likelihood loss is used to train the entire network in an end-to-end fashion by SGD with back-propagation. Several experiments on the CUB-200-2011, Stanford Cars and Aircraft datasets demonstrate that the proposed method performs favorably against state-of-the-art methods.
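
The routing-and-sum computation described in this abstract can be sketched generically. Below is a minimal illustration of soft root-to-leaf routing in a full binary tree, assuming sigmoid routers and fixed per-leaf class distributions; it is not the paper's exact architecture, which additionally attaches convolutions and attention transformer modules to the tree edges.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tree_predict(x, routers, leaf_dists):
    """Soft root-to-leaf routing in a full binary tree.

    routers[k]   : weight vector of internal node k (level order).
    leaf_dists[j]: class distribution predicted at leaf j.
    The final decision is the sum of leaf predictions weighted by the
    probability of their root-to-leaf path, as described in the abstract.
    """
    n_leaves = len(leaf_dists)
    depth = n_leaves.bit_length() - 1
    out = np.zeros_like(leaf_dists[0])
    for j in range(n_leaves):
        p_path, node = 1.0, 0
        for bit in format(j, f"0{depth}b"):      # MSB decides at the root
            p_right = sigmoid(routers[node] @ x)
            p_path *= p_right if bit == "1" else 1.0 - p_right
            node = 2 * node + (2 if bit == "1" else 1)
        out += p_path * leaf_dists[j]
    return out

# Example: depth-2 tree (3 internal nodes, 4 leaves) over a 5-dim feature.
rng = np.random.default_rng(0)
x = rng.normal(size=5)
routers = [rng.normal(size=5) for _ in range(3)]
leaf_dists = [rng.dirichlet(np.ones(10)) for _ in range(4)]
print(tree_predict(x, routers, leaf_dists))     # a valid 10-class distribution
```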

88 citations


Journal ArticleDOI
TL;DR: A natural language grounding model is proposed that can automatically compose a binary tree structure for parsing the language and then perform visual reasoning along the tree in a bottom-up fashion, achieving state-of-the-art performance with more explainable reasoning.
Abstract: Grounding natural language in images, such as localizing "the black dog on the left of the tree", is one of the core problems in artificial intelligence, as it needs to comprehend the fine-grained and compositional language space. However, existing solutions rely on the association between the holistic language features and visual features, while neglecting the compositional reasoning implied in the language. In this paper, we propose a natural language grounding model that can automatically compose a binary tree structure for parsing the language and then perform visual reasoning along the tree in a bottom-up fashion. We call our model RVG-TREE: Recursive Grounding Tree, which is inspired by the intuition that any language expression can be recursively decomposed into two constituent parts, and the grounding confidence score can be recursively accumulated by calculating the grounding scores returned by sub-trees. RVG-TREE can be trained end-to-end by using the Straight-Through Gumbel-Softmax estimator, which allows the gradients from the continuous score functions to pass through the discrete tree construction. Experiments on several benchmarks show that our model achieves state-of-the-art performance with more explainable reasoning.
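
The recursive score accumulation is the most mechanical part of RVG-TREE and is easy to make concrete. Below is a minimal sketch, assuming each node of an already constructed binary parse tree carries a grounding score for its constituent; the actual model also learns the tree structure itself via Gumbel-Softmax.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    score: float                      # grounding score of this constituent
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def accumulate(node: Node) -> float:
    """Bottom-up accumulation of grounding confidence over a binary tree:
    a node's confidence is its own score plus the scores returned by its
    two sub-trees, mirroring the recursive decomposition in the abstract."""
    if node.left is None and node.right is None:
        return node.score
    return node.score + accumulate(node.left) + accumulate(node.right)

# ("black dog" + ("left of" + "tree")) with made-up scores:
tree = Node(0.2, Node(0.7), Node(0.1, Node(0.5), Node(0.9)))
print(accumulate(tree))  # 2.4
```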

88 citations


Journal ArticleDOI
TL;DR: The proposed unsupervised keyphrase extraction technique, named TeKET or Tree-based Keyphrase Extraction Technique, is a domain-independent technique that employs limited statistical knowledge and requires no training data.
Abstract: Automatic keyphrase extraction techniques aim to extract quality keyphrases for higher-level summarization of a document. The majority of existing techniques are mainly domain-specific, requiring application domain knowledge and employing higher-order statistical methods, and are computationally expensive, requiring large training data, which is rare for many applications. Overcoming these issues, this paper proposes a new unsupervised keyphrase extraction technique. The proposed technique, named TeKET or Tree-based Keyphrase Extraction Technique, is domain-independent, employs limited statistical knowledge, and requires no training data. This technique also introduces a new variant of a binary tree, called the KeyPhrase Extraction (KePhEx) tree, to extract final keyphrases from candidate keyphrases. In addition, a measure, called the Cohesiveness Index or CI, is derived, which denotes a given node's degree of cohesiveness with respect to the root. The CI is used in flexibly extracting final keyphrases from the KePhEx tree and is co-utilized in the ranking process. The effectiveness of the proposed technique and its domain and language independence are experimentally evaluated using available benchmark corpora, namely SemEval-2010 (a scientific articles dataset), Theses100 (a thesis dataset), and a German research article dataset, respectively. The acquired results are compared with other relevant unsupervised techniques belonging to both statistical and graph-based families. The obtained results demonstrate the improved performance of the proposed technique over the compared techniques in terms of precision, recall, and F1 scores.

82 citations


Posted Content
TL;DR: The Neural Prototype Tree (ProtoTree) is proposed: an intrinsically interpretable deep learning method for fine-grained image recognition that combines prototype learning with decision trees, and thus results in a globally interpretable model by design.
Abstract: Prototype-based methods use interpretable representations to address the black-box nature of deep learning models, in contrast to post-hoc explanation methods that only approximate such models. We propose the Neural Prototype Tree (ProtoTree), an intrinsically interpretable deep learning method for fine-grained image recognition. ProtoTree combines prototype learning with decision trees, and thus results in a globally interpretable model by design. Additionally, ProtoTree can locally explain a single prediction by outlining a decision path through the tree. Each node in our binary tree contains a trainable prototypical part. The presence or absence of this learned prototype in an image determines the routing through a node. Decision making is therefore similar to human reasoning: Does the bird have a red throat? And an elongated beak? Then it's a hummingbird! We tune the accuracy-interpretability trade-off using ensemble methods, pruning and binarizing. We apply pruning without sacrificing accuracy, resulting in a small tree with only 8 learned prototypes along a path to classify a bird from 200 species. An ensemble of 5 ProtoTrees achieves competitive accuracy on the CUB-200-2011 and Stanford Cars data sets. Code is available at this https URL
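
A hard (binarized) version of the prototype routing can be sketched in a few lines. The (H, W, D) feature-map interface, the distance test, and the fixed threshold below are illustrative assumptions; in the actual model the routing is soft during training and the prototypes are trained parts of a conv net.

```python
import numpy as np
from dataclasses import dataclass
from typing import Optional

@dataclass
class PNode:
    prototype: Optional[np.ndarray] = None   # (D,) prototypical part, None at leaves
    threshold: float = 0.0
    left: Optional["PNode"] = None            # taken when the part is absent
    right: Optional["PNode"] = None           # taken when the part is present
    label: Optional[str] = None               # class name at a leaf

def predict(features: np.ndarray, node: PNode) -> str:
    """Binarized ProtoTree-style inference: at each internal node, the best
    match between the prototype and any spatial position of the (H, W, D)
    feature map decides whether the part is 'present', i.e. the routing."""
    while node.prototype is not None:
        dists = np.linalg.norm(features - node.prototype, axis=-1)
        node = node.right if dists.min() < node.threshold else node.left
    return node.label
```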

74 citations


Journal ArticleDOI
TL;DR: A lightweight and tunable QTBT partitioning scheme is proposed, based on a Machine Learning (ML) approach that uses Random Forest classifiers to determine the most probable partition modes for each coding block and to minimize the encoding loss induced by misclassification.
Abstract: The block partition structure is a critical module in a video coding scheme for achieving significant compression performance gains. Under the exploration of the future video coding standard, named Versatile Video Coding (VVC), a new Quad Tree Binary Tree (QTBT) block partition structure has been introduced. In addition to the QT block partitioning defined in the High Efficiency Video Coding (HEVC) standard, new horizontal and vertical BT partitions are enabled, which drastically increases the encoding time compared to HEVC. In this paper, we propose a lightweight and tunable QTBT partitioning scheme based on a Machine Learning (ML) approach. The proposed solution uses Random Forest classifiers to determine the most probable partition modes for each coding block. To minimize the encoding loss induced by misclassification, risk intervals for classifier decisions are introduced in the proposed solution. By varying the size of the risk intervals, a tunable trade-off between encoding complexity reduction and coding loss is achieved. The proposed solution implemented in the JEM-7.0 software offers encoding complexity reductions ranging from 30% to 70% on average for only a 0.7% to 3.0% Bjontegaard Delta Rate (BD-BR) increase in the Random Access (RA) coding configuration, with very slight overhead induced by the Random Forest classifiers. The proposed solution based on Random Forest classifiers is also efficient in reducing the complexity of the Multi-Type Tree (MTT) partitioning scheme under the VTM-5.0 software, with complexity reductions ranging from 25% to 61% on average for only a 0.4% to 2.2% BD-BR increase.
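
The risk-interval idea can be illustrated in a few lines. This is a sketch of the mechanism only: the mode names and numbers are invented, and the paper's actual interval definition (per classifier) may differ.

```python
def modes_to_test(probs, risk):
    """Keep every partition mode whose classifier probability lies within
    `risk` of the most probable one. A wider interval tests more modes,
    trading encoding-time savings for lower coding loss."""
    best = max(probs.values())
    return [m for m, p in probs.items() if p >= best - risk]

# modes_to_test({"NO_SPLIT": .52, "QT": .25, "BT_H": .13, "BT_V": .10}, .15)
# -> ["NO_SPLIT"]                         (fast, riskier)
# ...with risk=.30 -> ["NO_SPLIT", "QT"]  (slower, safer)
```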

72 citations


Journal ArticleDOI
TL;DR: A fast coding unit (CU) partition and intra mode decision algorithm is designed, which includes fast CU partition based on a random forest classifier (RFC) model and fast intra prediction mode optimization based on texture region features.
Abstract: The Versatile Video Coding (H.266/VVC) standard has been developed by the Joint Video Exploration Team (JVET). Compared with the previous generation video coding standard, H.266/VVC is more capable. Since H.266/VVC introduces a multi-type tree (MTT) structure including binary tree (BT) and ternary tree (TT), it brings significant coding efficiency gains but increases coding complexity. Moreover, the number of intra prediction modes has increased from 35 to 67, which can provide more accurate prediction than H.265/High Efficiency Video Coding (HEVC). These changes improve the encoding quality, but increase computational complexity. To reduce the computational complexity, this paper designs a fast coding unit (CU) partition and intra mode decision algorithm, which includes fast CU partition based on a random forest classifier (RFC) model and fast intra prediction mode optimization based on texture region features. Simulation results indicate that the proposed scheme can save 54.91% of encoding time with only a 0.93% increase in BD-BR.

63 citations


Journal ArticleDOI
TL;DR: Li et al. as mentioned in this paper proposed an improved reversible data hiding scheme in encrypted images using parametric binary tree labeling (IPBTL-RDHEI), which takes advantage of the spatial correlation in the entire original image but not in small image blocks to reserve room for hiding data.
Abstract: This work proposes an improved reversible data hiding scheme in encrypted images using parametric binary tree labeling (IPBTL-RDHEI), which takes advantage of the spatial correlation in the entire original image but not in small image blocks to reserve room for hiding data. Then the original image is encrypted with an encryption key and the parametric binary tree is used to label encrypted pixels into two different categories. Finally, one of the two categories of encrypted pixels can embed secret information by bit replacement. According to the experimental results, compared with several state-of-the-art methods, the proposed IPBTL-RDHEI method achieves higher embedding rate and outperforms the competitors. Due to the reversibility of IPBTL-RDHEI, the original plaintext image and the secret information can be restored and extracted losslessly and separately.
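
The final step, bit replacement, is standard and easy to illustrate. The toy sketch below overwrites the n least-significant bits of pixels already designated as embeddable; the parametric binary tree labeling that selects those pixels, and the image encryption, are elided.

```python
def embed(pixels, payload_bits, n):
    """Toy bit-replacement embedding: write payload bits into the n
    least-significant bits of each embeddable pixel (illustrative only)."""
    out, it = [], iter(payload_bits)
    for p in pixels:
        chunk = [next(it, 0) for _ in range(n)]          # pad with zeros
        value = int("".join(map(str, chunk)), 2)
        out.append((p & ~((1 << n) - 1)) | value)        # clear n LSBs, insert
    return out

# embed([200, 77], [1, 0, 1, 1], 2) -> [202, 79]
```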

60 citations


Proceedings ArticleDOI
14 Jun 2020
TL;DR: In this paper, the geometry of a 3D object is reconstructed as a set of primitives and their latent hierarchical structure without part-level supervision, where simple parts are represented with fewer primitives, and more complex parts are modeled with more components.
Abstract: Humans perceive the 3D world as a set of distinct objects that are characterized by various low-level (geometry, reflectance) and high-level (connectivity, adjacency, symmetry) properties. Recent methods based on convolutional neural networks (CNNs) demonstrated impressive progress in 3D reconstruction, even when using a single 2D image as input. However, the majority of these methods focuses on recovering the local 3D geometry of an object without considering its part-based decomposition or relations between parts. We address this challenging problem by proposing a novel formulation that allows us to jointly recover the geometry of a 3D object as a set of primitives as well as their latent hierarchical structure without part-level supervision. Our model recovers the higher level structural decomposition of various objects in the form of a binary tree of primitives, where simple parts are represented with fewer primitives and more complex parts are modeled with more components. Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.

51 citations


Posted Content
TL;DR: In this paper, the authors provide a continuous relaxation of Dasgupta's discrete optimization problem with provable quality guarantees by showing a direct correspondence from discrete trees to continuous representations via the hyperbolic embeddings of their leaf nodes.
Abstract: Similarity-based Hierarchical Clustering (HC) is a classical unsupervised machine learning algorithm that has traditionally been solved with heuristic algorithms like Average-Linkage. Recently, Dasgupta reframed HC as a discrete optimization problem by introducing a global cost function measuring the quality of a given tree. In this work, we provide the first continuous relaxation of Dasgupta's discrete optimization problem with provable quality guarantees. The key idea of our method, HypHC, is showing a direct correspondence from discrete trees to continuous representations (via the hyperbolic embeddings of their leaf nodes) and back (via a decoding algorithm that maps leaf embeddings to a dendrogram), allowing us to search the space of discrete binary trees with continuous optimization. Building on analogies between trees and hyperbolic space, we derive a continuous analogue for the notion of lowest common ancestor, which leads to a continuous relaxation of Dasgupta's discrete objective. We show that, after decoding, the global minimizer of our continuous relaxation yields a discrete tree with a (1 + epsilon)-factor approximation of Dasgupta's optimal tree, where epsilon can be made arbitrarily small and controls optimization challenges. We experimentally evaluate HypHC on a variety of HC benchmarks and find that even approximate solutions found with gradient descent have superior clustering quality to agglomerative heuristics or other gradient-based algorithms. Finally, we highlight the flexibility of HypHC using end-to-end training in a downstream classification task.
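
For reference, the discrete objective being relaxed is Dasgupta's cost, stated below; HypHC's contribution is replacing the leaf-count term with a continuous quantity derived from hyperbolic lowest common ancestors.

```latex
% Dasgupta's cost for a binary tree T over n points with pairwise
% similarities w_{ij}; T[i \vee j] is the subtree rooted at the lowest
% common ancestor of leaves i and j. Good trees merge similar points low
% in the tree, making the LCA subtree small.
C(T) \;=\; \sum_{i<j} w_{ij}\,\bigl|\mathrm{leaves}\bigl(T[i \vee j]\bigr)\bigr|
```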

42 citations


Posted Content
TL;DR: This work proposes a novel formulation that allows us to jointly recover the geometry of a 3D object as a set of primitives as well as their latent hierarchical structure without part-level supervision, and recovers the higher level structural decomposition of various objects in the form of a binary tree of primitives.
Abstract: Humans perceive the 3D world as a set of distinct objects that are characterized by various low-level (geometry, reflectance) and high-level (connectivity, adjacency, symmetry) properties. Recent methods based on convolutional neural networks (CNNs) demonstrated impressive progress in 3D reconstruction, even when using a single 2D image as input. However, the majority of these methods focuses on recovering the local 3D geometry of an object without considering its part-based decomposition or relations between parts. We address this challenging problem by proposing a novel formulation that allows us to jointly recover the geometry of a 3D object as a set of primitives as well as their latent hierarchical structure without part-level supervision. Our model recovers the higher level structural decomposition of various objects in the form of a binary tree of primitives, where simple parts are represented with fewer primitives and more complex parts are modeled with more components. Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.

39 citations


Proceedings ArticleDOI
01 Nov 2020
TL;DR: S-DIORA, an improved variant of DIORA that encodes a single tree rather than a softly-weighted mixture of trees by employing a hard argmax operation and a beam at each cell in the chart, is introduced.
Abstract: The deep inside-outside recursive autoencoder (DIORA; Drozdov et al. 2019) is a self-supervised neural model that learns to induce syntactic tree structures for input sentences *without access to labeled training data*. In this paper, we discover that while DIORA exhaustively encodes all possible binary trees of a sentence with a soft dynamic program, its vector averaging approach is locally greedy and cannot recover from errors when computing the highest scoring parse tree in bottom-up chart parsing. To fix this issue, we introduce S-DIORA, an improved variant of DIORA that encodes a single tree rather than a softly-weighted mixture of trees by employing a hard argmax operation and a beam at each cell in the chart. Our experiments show that through *fine-tuning* a pre-trained DIORA with our new algorithm, we improve the state of the art in *unsupervised* constituency parsing on the English WSJ Penn Treebank by 2.2-6% F1, depending on the data used for fine-tuning.
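
The data-structural change S-DIORA makes, keeping a small beam of discrete trees in each chart cell rather than a softly-weighted mixture, can be sketched as follows. The `scores(l, s, r)` callable is an assumed stand-in for DIORA's learned composition scores.

```python
import heapq

def parse(scores, words, k=2):
    """CKY-style chart where each cell holds the top-k discrete binary
    trees for its span (a beam), instead of a soft average over all trees.
    Returns the best (score, tree) pair for the whole sentence."""
    n = len(words)
    chart = {(i, i + 1): [(0.0, words[i])] for i in range(n)}
    for length in range(2, n + 1):
        for l in range(n - length + 1):
            r = l + length
            cands = []
            for s in range(l + 1, r):                      # split point
                for ls, lt in chart[(l, s)]:
                    for rs, rt in chart[(s, r)]:
                        cands.append((ls + rs + scores(l, s, r), (lt, rt)))
            chart[(l, r)] = heapq.nlargest(k, cands, key=lambda c: c[0])
    return chart[(0, n)][0]

# A toy scorer that prefers right-branching trees:
print(parse(lambda l, s, r: 1.0 if s == l + 1 else 0.0, "the cat sat".split()))
# -> (2.0, ('the', ('cat', 'sat')))
```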

Proceedings Article
05 Jan 2020
TL;DR: A general and versatile algorithmic framework for exhaustively generating a large variety of different combinatorial objects, based on encoding them as permutations, which provides a unified view on many known results and allows us to prove many new ones.
Abstract: In this work we present a general and versatile algorithmic framework for exhaustively generating a large variety of different combinatorial objects, based on encoding them as permutations. This approach provides a unified view on many known results and allows us to prove many new ones. In particular, we obtain the following four classical Gray codes as special cases: the Steinhaus-Johnson-Trotter algorithm to generate all permutations of an n-element set by adjacent transpositions; the binary reflected Gray code to generate all n-bit strings by flipping a single bit in each step; the Gray code for generating all n-vertex binary trees by rotations due to Lucas, van Baronaigien, and Ruskey; the Gray code for generating all partitions of an n-element ground set by element exchanges due to Kaye. We present two distinct applications for our new framework: The first main application is the generation of pattern-avoiding permutations, yielding new Gray codes for different families of permutations that are characterized by the avoidance of certain classical patterns, (bi)vincular patterns, barred patterns, Bruhat-restricted patterns, mesh patterns, monotone and geometric grid classes, and many others. We thus also obtain new Gray code algorithms for the combinatorial objects that are in bijection to these permutations, in particular for five different types of geometric rectangulations, also known as floorplans, which are divisions of a square into n rectangles subject to certain restrictions. The second main application of our framework is lattice congruences of the weak order on the symmetric group Sn. Recently, Pilaud and Santos realized all those lattice congruences as (n − 1)-dimensional polytopes, called quotientopes, which generalize hypercubes, associahedra, permutahedra etc. Our algorithm generates the equivalence classes of each of those lattice congruences, by producing a Hamilton path on the skeleton of the corresponding quotientope, yielding a constructive proof that each of these highly symmetric graphs is Hamiltonian. We thus also obtain a provable notion of optimality for the Gray codes obtained from our framework: They translate into walks along the edges of a polytope.
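
Of the four classical Gray codes the framework recovers, the binary reflected Gray code is the quickest to state concretely, via the classic formula g(i) = i XOR (i >> 1):

```python
def binary_reflected_gray_code(n):
    """All n-bit strings ordered so consecutive strings differ in one bit."""
    return [format(i ^ (i >> 1), f"0{n}b") for i in range(1 << n)]

print(binary_reflected_gray_code(3))
# ['000', '001', '011', '010', '110', '111', '101', '100']
```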

Journal ArticleDOI
Hong Zhong, Li Zhanfei, Jie Cui, Yue Sun, Lu Liu
TL;DR: This paper proposes an efficient dynamic multi-keyword fuzzy search scheme for encrypted cloud data to support dynamic file updates and demonstrates that this scheme is more efficient than existing similar schemes.

Journal ArticleDOI
TL;DR: A novel non-split condition with an easily set hyperparameter, which focuses more on the minority classes of the current node, is proposed and applied in the BDTKS model, preventing the minority classes from being ignored in class-imbalanced cases and speeding up the classification process.

Journal ArticleDOI
TL;DR: In this article, a hierarchical decomposition of the approximation space obtained by splitting the independent variables of the problem into disjoint subsets is proposed, which can be conveniently visualized in terms of binary trees, yielding series expansions analogous to the classical Tensor-Train and Hierarchical Tucker tensor formats.

Journal ArticleDOI
01 May 2020
TL;DR: The proposed work, Hybrid Context Aware Recommendation System for E-Health Care (HCARS-EHC), is implemented, and the implementation results illustrate that the protocol is efficient in privacy preservation, recommendation and ranking, with low computation and communication complexity.
Abstract: Privacy preservation permits doctors to outsource huge encrypted reports to the cloud and permits authenticated patients to search safely over the reports without leaking private information. In our proposal, doctors use a Merkle hash tree for storing the reports of all the patients in the hospital. Existing schemes have used many types of trees, such as the binary tree, red–black tree, spanning tree, and B+ tree, for index generation. Since those trees offer less security and higher search times, we base the index generation phase on a Merkle hash tree combined with an evolutionary algorithm, which takes less time for searching and is highly secure for storing patient reports. The evolutionary algorithm breeds new data through crossover and mutation operations to produce new children. When a patient submits a search request for a specialized doctor, our protocol recommends specialized doctors based on the patient's disease and sends the information of the recommended doctors with the highest ratings in online social networks to the patient. After receiving the recommended results, the patient can have treatment via an online booking appointment, a video call, or in person, depending on the appointment booked. Once completely cured, patients can rate the doctors based on medicine satisfaction, doctors' fees, and the doctor's response over the call. In this mechanism, we use hybrid context-aware recommendation system collaborative filtering to rate the doctors based on their performance. After rating the doctors, our protocol measures accuracy based on the predicted rating and the true rating; this accuracy metric is used for ranking good doctors at the top for patient use. Our proposed work, Hybrid Context Aware Recommendation System for E-Health Care (HCARS-EHC), is implemented, and the implementation results illustrate that our protocol is efficient in privacy preservation, recommendation and ranking, with low computation and communication complexity.
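
The index structure at the heart of the scheme, a Merkle hash tree, can be built in a few lines. This is the generic construction (SHA-256, duplicating the last node on odd levels), not the paper's specific variant with the evolutionary algorithm.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(reports):
    """Bottom-up Merkle hash tree: leaves are hashes of the patient reports,
    each internal node hashes the concatenation of its two children, and the
    root authenticates the whole set."""
    level = [sha256(r) for r in reports]
    while len(level) > 1:
        if len(level) % 2:                 # odd level: duplicate the last node
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root([b"report-1", b"report-2", b"report-3"]).hex())
```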

Journal ArticleDOI
TL;DR: The methodology was proven to be successful in identifying the correct physical models describing the relationship between shear stress and shear rate for both Newtonian and non-Newtonian fluids, and simple kinetic laws of chemical reactions.

Proceedings ArticleDOI
01 Dec 2020
TL;DR: In this paper, the authors propose a systematic framework to embed the network topology into a hierarchical binary virtual-identity space that is particularly amenable to multi-path routing.
Abstract: In recent years, the effort of promoting versatile, easy-to-manage routing schemes as a replacement for OSPF has gathered momentum, particularly in the context of large-scale enterprise networks, data center networks and software-defined wide area networks (SD-WANs). Such routing schemes rely on embedding the network into a geometric/topological space (e.g. a binary tree) to facilitate multi-path routing with reduced state maintenance and quick recovery in localized failure scenarios. In this work, we propose a systematic framework to embed the network topology into a hierarchical binary virtual-identity space that is particularly amenable to multi-path routing. Our methodology first involves a relaxed form of the connected graph bi-partitioning problem that exploits a geometric embedding of the network in an n-dimensional Euclidean space (n being the number of hosts in the network) based on the Moore-Penrose pseudoinverse of the Laplacian for the graph associated with the network. The edges of the network are mapped to a weight distribution that helps construct a spanning tree from the core of the network towards the periphery, thereby providing a point of symmetry in the network to facilitate balanced bipartitions. This, in turn, yields a (nearly) full balanced binary tree embedding of the network and consequently a good virtual-id space. We also approach the binary identity assignment problem from another point of view, introducing a recursive bipartition algorithm that takes a bi-connected graph as input. Through rigorous theoretical analysis and experimentation, we demonstrate that our methods perform well within reasonable bounds of computational complexity.
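
The embedding step is concrete enough to sketch: each host becomes a point in R^n whose coordinates are its row of the Moore-Penrose pseudoinverse of the unnormalized Laplacian. The weight distribution, spanning-tree construction and recursive bipartitioning built on top of this embedding are elided here.

```python
import numpy as np

def laplacian_pinv_embedding(adjacency):
    """Map node i of a graph to row i of the Moore-Penrose pseudoinverse of
    the unnormalized Laplacian L = D - A. These coordinates encode the
    graph metrically: the effective resistance between nodes i and j equals
    Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.pinv(L)

# 4-cycle: opposite nodes end up farthest apart in the embedding.
emb = laplacian_pinv_embedding([[0, 1, 0, 1],
                                [1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [1, 0, 1, 0]])
```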

Journal ArticleDOI
TL;DR: A fast Coding Unit (CU) size decision method for intra prediction of VVC is proposed, which can significantly reduce the calculation in the intra encoding of VVC.
Abstract: The versatile video coding (VVC) is the latest video coding standard, which uses a Multi-Type Tree (MTT) coding structure. Compared with existing video coding standards, this structure can flexibly split coding blocks according to the complex texture features of the image. As the MTT structure introduces binary tree (BT) and ternary tree (TT) splitting, it leads to a sharp increase in computational complexity. In this paper, a fast Coding Unit (CU) size decision method for intra prediction of VVC is proposed, which can significantly reduce the calculation in the intra encoding of VVC. The proposed method consists of two steps: 1) determine whether a CU is divided and 2) select the best CU splitting mode. In the intra prediction process, the CU texture complexity is first calculated, which judges whether the CU is divided into sub-CUs. Then, the unnecessary splitting mode candidates are discarded according to the relationship between the texture direction and the CU splitting mode. The experimental results show that our proposed fast CU partition method reduces the computational complexity by about 48.58%, while the BDBR only increases by 0.91%.
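
The two-step structure lends itself to a sketch. The variance test, the gradient-based direction test, and the mapping from texture direction to retained split modes below are illustrative assumptions, not the paper's calibrated decision rules.

```python
import numpy as np

def cu_split_modes(block, t_var, t_dir):
    """Step 1: a smooth block (low variance) is not split at all.
    Step 2: otherwise, discard splitting modes that disagree with the
    dominant gradient direction, keeping the rest as candidates."""
    block = np.asarray(block, dtype=float)
    if block.var() < t_var:
        return ["NO_SPLIT"]
    gy, gx = np.gradient(block)                  # vertical / horizontal gradients
    if np.abs(gx).sum() > t_dir * np.abs(gy).sum():
        return ["QT", "BT_V", "TT_V"]            # strong vertical structure
    if np.abs(gy).sum() > t_dir * np.abs(gx).sum():
        return ["QT", "BT_H", "TT_H"]            # strong horizontal structure
    return ["QT", "BT_H", "BT_V", "TT_H", "TT_V"]
```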

Journal ArticleDOI
TL;DR: The experiments show that DiFF-RF almost systematically outperforms the IF algorithm and one of its extended variants, and also challenges the one-class SVM baseline, a deep learning variational auto-encoder, and ensembles of auto-encoder architectures.
Abstract: In this paper, we propose DiFF-RF, an ensemble approach composed of random partitioning binary trees to detect point-wise and collective (as well as contextual) anomalies. Thanks to a distance-based paradigm used at the leaves of the trees, this semi-supervised approach solves a drawback that has been identified in the isolation forest (IF) algorithm. Moreover, taking into account the frequencies of visits in the leaves of the random trees allows us to significantly improve the performance of DiFF-RF when considering the presence of collective anomalies. DiFF-RF is fairly easy to train, and excellent performance can be obtained by using a simple semi-supervised procedure to set up the extra hyper-parameter that is introduced. We first evaluate DiFF-RF on a synthetic data set to i) verify that the limitation of the IF algorithm is overcome, ii) demonstrate how collective anomalies are actually detected, and iii) analyze the effect of the meta-parameters it involves. We assess the DiFF-RF algorithm on a large set of datasets from the UCI repository, as well as two benchmarks related to intrusion detection applications. Our experiments show that DiFF-RF almost systematically outperforms the IF algorithm, but also challenges the one-class SVM baseline and a deep learning variational auto-encoder architecture. Furthermore, our experience shows that DiFF-RF can work well in the presence of small-scale learning data, which is conversely difficult for deep neural architectures. Finally, DiFF-RF is computationally efficient and can be easily parallelized on multi-core architectures.
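
As a loose illustration of the two ingredients named in the abstract, and only that (the paper's actual scoring formulas differ), a leaf-level score might combine a distance term for point-wise anomalies with a visit-frequency term for collective ones:

```python
import numpy as np

def leaf_anomaly_score(x, leaf_centroid, observed_freq, expected_freq):
    """Toy combination of DiFF-RF's two signals: how far x lies from the
    training data that reached this leaf (point-wise anomalies), scaled by
    how much more often the leaf is visited than expected on normal data
    (collective anomalies). Illustrative only, not the paper's formula."""
    distance = np.linalg.norm(np.asarray(x) - np.asarray(leaf_centroid))
    frequency_ratio = observed_freq / max(expected_freq, 1e-12)
    return distance * frequency_ratio
```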

Journal ArticleDOI
TL;DR: This article proposes a continuous object boundary tracking algorithm for IWSNs, which fully exploits the collective intelligence and machine learning capability within the sensor nodes.
Abstract: Due to the flammability, explosibility and toxicity of continuous objects (e.g. chemical gas, oil spill, radioactive waste) in the petrochemical and nuclear industries, boundary tracking of continuous objects is becoming a critical issue for industrial wireless sensor networks (IWSNs) to achieve disaster reduction. In this article, we fully exploit the collective intelligence and machine learning capability of sensor nodes, and propose a continuous object boundary tracking algorithm for IWSNs. The proposed algorithm first determines an upper bound of the event region covered by continuous objects. A full binary tree-based partition is performed within the event region for coarse-grained boundary area mapping. To study the irregularity of continuous objects in detail, the boundary tracking problem is further transformed into a binary classification problem. A hierarchical soft-margin support vector machine training strategy is designed to address the binary classification problem in a distributed fashion. Simulation results demonstrate that the proposed algorithm achieves high tracking accuracy with fewer boundary nodes. Without additional fault-tolerant mechanisms, the proposed algorithm is inherently robust to false sensor readings reported by a low proportion of faulty nodes, which is expected in harsh industrial environments.

Journal ArticleDOI
23 Mar 2020
TL;DR: A novel fault interpretation method based on image features and binary tree support vector machine (SVM) is proposed, which can get the condition of three windings in one measurement and has high accuracy for identifying fault type and faulty winding in AT.
Abstract: The autotransformer (AT) is the most core power supply equipment, and overvoltage and short circuit (SC) faults may lead to winding deformation, which will have a negative impact on its insulation and even affect the operation of a train. Frequency response analysis (FRA) is widely used for detecting winding faults in a transformer. However, direct measurement of FRA for each split winding fails because the split windings are adopted to satisfy the impedance requirement of a high-speed railway, where the windings are connected inside the tank. A novel fault interpretation method based on image features and a binary tree support vector machine (SVM) is proposed, which can get the condition of three windings in one measurement. Winding faults caused by different windings are simulated, including SC defect, axial deformation, and series capacitance variation, and the FRA curves are measured under various faults. Then, the features of the gray-level gradient co-occurrence matrix and the gray-level difference statistics are obtained from the polar plot of FRA. Finally, the image features are used as the inputs to the binary tree SVM for fault type and faulty winding classification. The results show that the proposed method has high accuracy for identifying fault type and faulty winding in AT.
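
The classification stage, descending a binary tree whose internal nodes hold trained SVMs, is simple to sketch. The image-feature extraction and the tree layout are assumed given; `svm` can be any fitted binary classifier exposing a scikit-learn-style `predict`.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SVMNode:
    svm: Optional[object] = None        # fitted binary classifier (e.g. sklearn SVC)
    left: Optional["SVMNode"] = None
    right: Optional["SVMNode"] = None
    label: Optional[str] = None         # fault type / faulty winding at a leaf

def classify(features, node: SVMNode) -> str:
    """Binary-tree SVM: each internal node's SVM separates one group of
    classes from the rest; descend until a leaf names the diagnosis."""
    while node.svm is not None:
        node = node.right if node.svm.predict([features])[0] == 1 else node.left
    return node.label
```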

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a forward and backward private dynamic searchable symmetric encryption (DSSE) scheme using a refined binary tree data structure; detailed security analysis and extensive experiments demonstrate that their proposal is secure and efficient.
Abstract: Due to its capabilities of searches and updates over the encrypted database, the dynamic searchable symmetric encryption (DSSE) has received considerable attention recently. To resist leakage abuse attacks, a secure DSSE scheme usually requires forward and backward privacy. However, the existing forward and backward private DSSE schemes either only support single keyword queries or require more interactions between the client and the server. In this paper, we first give a new leakage function for range queries, which is more complicated than the one for single keyword queries. Furthermore, we propose a concrete forward and backward private DSSE scheme by using a refined binary tree data structure. Finally, the detailed security analysis and extensive experiments demonstrate that our proposal is secure and efficient, respectively.
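
A standard building block behind binary-tree-based range queries, and a plausible role for the refined tree here (the paper's exact structure is its own), is the canonical cover: any range over the leaves decomposes into O(log n) tree nodes, so one range query reduces to a few node-label lookups.

```python
def range_cover(l, r, lo=0, hi=16, node=1):
    """Return the canonical nodes of a perfect binary tree over leaves
    [lo, hi) that exactly cover the query range [l, r). Nodes are numbered
    heap-style: root 1, children of k are 2k and 2k+1."""
    if r <= lo or hi <= l:                       # disjoint
        return []
    if l <= lo and hi <= r:                      # fully contained
        return [node]
    mid = (lo + hi) // 2
    return (range_cover(l, r, lo, mid, 2 * node) +
            range_cover(l, r, mid, hi, 2 * node + 1))

print(range_cover(3, 11))   # [19, 5, 12, 26]: four nodes cover eight leaves
```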

Journal ArticleDOI
TL;DR: This paper builds a binary tree model to stitch each pair of matched images without sorting, and estimates the overlapping areas between input images so that the extraction and matching of feature points are only performed in these areas.
Abstract: Aiming at the complex computation and time-consuming nature of unordered image stitching, we present a method based on a binary tree and estimated overlapping areas to stitch unordered images. For image registration, the overlapping areas between input images are estimated, so that the extraction and matching of feature points are only performed in these areas. For image stitching, we build a binary tree model to stitch each pair of matched images without sorting. Compared to traditional methods, our method significantly reduces the computational time of matching irrelevant image pairs and improves the efficiency of image registration and stitching. Moreover, the binary tree stitching model proposed in this paper further reduces the distortion of the panorama. Experimental results show that the number of extracted feature points in the estimated overlapping area is approximately 0.3 to 0.6 times that in the entire image by using the same method, which greatly reduces the computational time of feature extraction and matching. Compared to the exhaustive image matching method, our approach only takes about 1/3 of the time to find all matching images.
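
The binary-tree stitching model amounts to merging matched pairs level by level, so the number of composites roughly halves in each pass. A minimal sketch, assuming `match` (a feature-based pair test) and `merge` (a pairwise stitch) are provided by the registration stage:

```python
def stitch_tree(images, match, merge):
    """Build the panorama bottom-up: each pass pairs up matched images and
    merges them, forming one level of the binary tree, until a single
    composite (the root, i.e. the panorama) remains."""
    level = list(images)
    while len(level) > 1:
        merged, pool = [], list(level)
        while pool:
            a = pool.pop(0)
            j = next((k for k, b in enumerate(pool) if match(a, b)), None)
            merged.append(merge(a, pool.pop(j)) if j is not None else a)
        if len(merged) == len(level):   # no pair matched; stop rather than loop
            break
        level = merged
    return level[0]
```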

Journal ArticleDOI
TL;DR: The objectives of this paper are to investigate the capability of genetic programming to select and extract linearly separable features when the evolutionary process is guided to achieve the same and to propose an integrated system for that.
Abstract: The objectives of this paper are to investigate the capability of genetic programming to select and extract linearly separable features when the evolutionary process is guided to achieve the same, and to propose an integrated system for that. We decompose a $c$-class problem into $c$ binary classification problems and evolve $c$ sets of binary classifiers employing a steady-state multiobjective genetic programming with three minimizing objectives. Each binary classifier is composed of a binary tree and a linear support vector machine (SVM). The features extracted by the feature nodes and some of the function nodes of the tree are used to train the SVM. The decision made by the SVM is considered the decision of the corresponding classifier. During crossover and mutation, the SVM weights are used to determine the usefulness of the corresponding nodes. We also use a fitness function based on Golub's index to select useful features. To discard less frequently used features, we employ unfitness functions for the feature nodes. We compare our method with 34 classification systems using 18 datasets. The performance of the proposed method is found to be better in 432 out of 570, i.e., 75.79%, of the compared cases. Our results confirm that the proposed method is capable of achieving our objectives.

Proceedings ArticleDOI
07 Oct 2020
TL;DR: This paper proposes a method for unsupervised parsing based on the linguistic notion of a constituency test, which involves modifying the sentence via some transformation and then judging the result (e.g. checking if it is grammatical).
Abstract: We propose a method for unsupervised parsing based on the linguistic notion of a constituency test. One type of constituency test involves modifying the sentence via some transformation (e.g. replacing the span with a pronoun) and then judging the result (e.g. checking if it is grammatical). Motivated by this idea, we design an unsupervised parser by specifying a set of transformations and using an unsupervised neural acceptability model to make grammaticality decisions. To produce a tree given a sentence, we score each span by aggregating its constituency test judgments, and we choose the binary tree with the highest total score. While this approach already achieves performance in the range of current methods, we further improve accuracy by fine-tuning the grammaticality model through a refinement procedure, where we alternate between improving the estimated trees and improving the grammaticality model. The refined model achieves 62.8 F1 on the Penn Treebank test set, an absolute improvement of 7.6 points over the previously best published result.
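
The span-scoring step can be made concrete. The three transformations below (pronoun substitution, deletion, fronting) are typical constituency tests standing in for the paper's full set, and `judge` is an assumed callable wrapping the grammaticality model; the output parse is then the binary tree maximizing the total score of its spans, computable with a standard CKY chart.

```python
def span_score(tokens, i, j, judge):
    """Aggregate constituency-test judgments for the span tokens[i:j]:
    transform the sentence in several ways and sum the grammaticality
    scores assigned by `judge` to each transformed sentence."""
    span = tokens[i:j]
    transforms = [
        tokens[:i] + ["it"] + tokens[j:],    # substitute the span by a pronoun
        tokens[:i] + tokens[j:],             # delete the span
        span + tokens[:i] + tokens[j:],      # front the span
    ]
    return sum(judge(" ".join(t)) for t in transforms)

# e.g. span_score("the cat sat on the mat".split(), 3, 6, judge=my_model)
# scores "the cat sat it", "the cat sat", "on the mat the cat sat"
# (judge / my_model are assumed stand-ins for the acceptability model).
```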

Book ChapterDOI
TL;DR: In this paper, the authors presented a few concepts that have been used in the computation of tree-level scattering amplitudes (mostly using pure spinor methods) in a context that could be of interest to the combinatorics community.
Abstract: These notes are a written version of my talk given at the CARMA workshop in June 2017, with some additional material. I presented a few concepts that have recently been used in the computation of tree-level scattering amplitudes (mostly using pure spinor methods but not restricted to them) in a context that could be of interest to the combinatorics community. In particular, I focused on the appearance of planar binary trees in scattering amplitudes and presented some curious identities obeyed by related objects, some of which are known to be true only via explicit examples.

Journal ArticleDOI
01 May 2020-Heliyon
TL;DR: The proposed Weighted Full Binary Tree-Sliced Binary Pattern analyzes an image in RGB dimensions based on patterns of inter-pixel similarity by tracing the similarity path.

Journal ArticleDOI
TL;DR: In this article, it was shown that for tree tensor networks, the set of coarse-graining maps yielding discontinuous representations has full measure in the complete set of all isometries.
Abstract: Tree tensor network descriptions of critical quantum spin chains are empirically known to reproduce correlation functions matching conformal field theory (CFT) predictions in the continuum limit. It is natural to seek a more complete correspondence, additionally incorporating dynamics. On the CFT side, this is determined by a representation of the diffeomorphism group of the circle. In a remarkable series of papers, Jones outlined a research program where the Thompson group T takes the role of the latter in the discrete setting, and representations of T are constructed from certain elements of a subfactor planar algebra. He also showed that, for a particular example of such a construction, this approach only yields, in the continuum limit, a representation which is highly discontinuous and hence unphysical. Here we show that the same issue arises generically when considering tree tensor networks: the set of coarse-graining maps yielding discontinuous representations has full measure in the set of all isometries. This extends Jones's no-go example to typical elements of the so-called tensor planar algebra. We also identify an easily verified necessary condition for a continuous limit to exist. This singles out a particular class of tree tensor networks. Our considerations apply to recent approaches for introducing dynamics in holographic codes.

Posted Content
TL;DR: A recursive bi-partitioning algorithm is developed that divides the network into two communities based on the Fiedler vector of the unnormalized graph Laplacian and repeats the split until a stopping rule indicates no further community structures.
Abstract: We propose a generic network model, based on the Stochastic Block Model, to study the hierarchy of communities in real-world networks, under which the connection probabilities are structured in a binary tree. Under the network model, we show that the eigenstructure of the expected unnormalized graph Laplacian reveals the community structure of the network as well as the hierarchy of communities in a recursive fashion. Inspired by the nice property of the population eigenstructure, we develop a recursive bi-partitioning algorithm that divides the network into two communities based on the Fiedler vector of the unnormalized graph Laplacian and repeats the split until a stopping rule indicates no further community structures. We prove the weak and strong consistency of our algorithm for sparse networks with the expected node degree in $O(\log n)$ order, based on newly developed theory on $\ell_{2\rightarrow\infty}$ eigenspace perturbation, without knowing the total number of communities in advance. Unlike most of existing work, our theory covers multi-scale networks where the connection probabilities may differ in order of magnitude, which comprise an important class of models that are practically relevant but technically challenging to deal with. Finally we demonstrate the performance of our algorithm on synthetic data and real-world examples.
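
One level of the recursive bi-partitioning is easy to make concrete: split the nodes by the sign of the Fiedler vector of the unnormalized Laplacian. A minimal sketch; the stopping rule and the consistency theory are the paper's contribution and are elided.

```python
import numpy as np

def fiedler_split(adjacency):
    """Split a graph into two communities by the sign of the Fiedler vector,
    the eigenvector for the second-smallest eigenvalue of the unnormalized
    Laplacian L = D - A. Recursing on each side yields the binary hierarchy."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    _, eigvecs = np.linalg.eigh(L)        # columns sorted by ascending eigenvalue
    fiedler = eigvecs[:, 1]
    return np.where(fiedler >= 0)[0], np.where(fiedler < 0)[0]
```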