Topic

Adjacency list

About: Adjacency list is a research topic. Over its lifetime, 4,419 publications have been published within this topic, receiving 78,449 citations.
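
For readers new to the topic, here is a minimal sketch of the data structure itself, in plain Python (dict-of-lists); it is illustrative only and not tied to any of the papers listed below.

from collections import defaultdict

def build_adjacency_list(edges, directed=False):
    """Map each vertex to the list of vertices it is connected to."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        if not directed:
            adj[v].append(u)
    return adj

graph = build_adjacency_list([("a", "b"), ("b", "c"), ("a", "c")])
print(dict(graph))  # {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['b', 'a']}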


Papers
Journal ArticleDOI
TL;DR: The local chainlike linear growth induced by grammar and style is identified as an element missing from the Dorogovtsev-Mendes model; the model is extended to incorporate such effects, and satisfactory agreement with the empirical results is obtained.
Abstract: We investigate properties of evolving linguistic networks defined by the word-adjacency relation. Such networks belong to the category of networks with accelerated growth, but their shortest-path length appears to exhibit a network-size dependence of a functional form different from those known so far. We thus compare the networks created from literary texts with their artificial substitutes based on different variants of the Dorogovtsev-Mendes model and observe that none of them is able to properly simulate the novel asymptotics of the shortest-path length. We then identify the local chainlike linear growth induced by grammar and style as the element missing from this model and extend the model by incorporating such effects. In this way, satisfactory agreement with the empirical results is obtained.

30 citations
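
The word-adjacency relation above links words that occur next to each other in a text. A minimal sketch of such a construction (hypothetical tokenization, undirected, repeated links collapsed; not the authors' code):

import re
from collections import defaultdict

def word_adjacency_network(text):
    """Adjacency list linking each word to the words occurring right next to it."""
    words = re.findall(r"[a-z']+", text.lower())
    adj = defaultdict(set)
    for w1, w2 in zip(words, words[1:]):
        if w1 != w2:                 # ignore immediate repetitions
            adj[w1].add(w2)
            adj[w2].add(w1)
    return adj

network = word_adjacency_network("the cat sat on the mat and the cat slept")
print(sorted(network["the"]))        # ['and', 'cat', 'mat', 'on']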

Posted Content
TL;DR: A scalable Bayesian model for low-rank factorization of massive tensors with binary observations that uses a zero-truncated Poisson likelihood for binary data, achieves excellent computational scalability, and demonstrates its usefulness in leveraging side-information provided in the form of mode-networks.
Abstract: We present a scalable Bayesian model for low-rank factorization of massive tensors with binary observations. The proposed model has the following key properties: (1) in contrast to the models based on the logistic or probit likelihood, using a zero-truncated Poisson likelihood for binary data allows our model to scale up in the number of \emph{ones} in the tensor, which is especially appealing for massive but sparse binary tensors; (2) side-information in form of binary pairwise relationships (e.g., an adjacency network) between objects in any tensor mode can also be leveraged, which can be especially useful in "cold-start" settings; and (3) the model admits simple Bayesian inference via batch, as well as \emph{online} MCMC; the latter allows scaling up even for \emph{dense} binary data (i.e., when the number of ones in the tensor/network is also massive). In addition, non-negative factor matrices in our model provide easy interpretability, and the tensor rank can be inferred from the data. We evaluate our model on several large-scale real-world binary tensors, achieving excellent computational scalability, and also demonstrate its usefulness in leveraging side-information provided in form of mode-network(s).

30 citations
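
The scalability argument hinges on the zero-truncated Poisson construction: with P(y=1) = 1 - exp(-rate) and a non-negative CP-style rate, the sum of rates over all tensor entries factorizes per component, so a likelihood evaluation only has to touch the observed ones. A rough numpy sketch of that bookkeeping follows; it is illustrative only, and the paper's Bayesian/MCMC inference machinery is omitted.

import numpy as np

def ztp_binary_loglik(ones_idx, factors):
    """Log-likelihood of a binary tensor with CP-style non-negative rates,
    P(y=1) = 1 - exp(-rate), touching only the entries that equal 1.

    ones_idx : (N, M) integer array with the indices of the N ones.
    factors  : list of M non-negative factor matrices, each of shape (I_m, R).
    """
    # Rates at the observed ones: elementwise product of the selected factor rows.
    rows = np.ones((ones_idx.shape[0], factors[0].shape[1]))
    for mode, U in enumerate(factors):
        rows *= U[ones_idx[:, mode]]
    rate_ones = rows.sum(axis=1)

    # Sum of rates over ALL entries factorizes over components (no dense pass needed).
    total_rate = np.prod([U.sum(axis=0) for U in factors], axis=0).sum()

    ll_ones = np.log1p(-np.exp(-rate_ones)).sum()   # y = 1 terms
    ll_zeros = -(total_rate - rate_ones.sum())      # y = 0 terms contribute -rate each
    return ll_ones + ll_zeros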

Journal ArticleDOI
TL;DR: This paper describes a preliminary planning module, developed as part of a complete CAPP (computer-aided process planning) system, that deals with sequencing at the form-feature level.

30 citations

Posted Content
TL;DR: This work develops the first deep learning model for hierarchical segmentation of 3D shapes, based on recursive neural networks; it segments a 3D shape given as a point cloud into an arbitrary number of parts, depending on the shape complexity, showing strong generality and flexibility.
Abstract: Deep learning approaches to 3D shape segmentation are typically formulated as a multi-class labeling problem. Existing models are trained for a fixed set of labels, which greatly limits their flexibility and adaptivity. We opt for top-down recursive decomposition and develop the first deep learning model for hierarchical segmentation of 3D shapes, based on recursive neural networks. Starting from a full shape represented as a point cloud, our model performs recursive binary decomposition, where the decomposition networks at all nodes in the hierarchy share weights. At each node, a node classifier is trained to determine the type (adjacency or symmetry) and stopping criteria of its decomposition. The features extracted in higher-level nodes are recursively propagated to lower-level ones. Thus, the meaningful decompositions at higher levels provide strong contextual cues constraining the segmentations at lower levels. Meanwhile, to increase the segmentation accuracy at each node, we enhance the recursive contextual feature with the shape feature extracted for the corresponding part. Our method segments a 3D shape, given as a point cloud, into an unfixed number of parts, depending on the shape complexity, showing strong generality and flexibility. It achieves state-of-the-art performance, for both fine-grained and semantic segmentation, on the public benchmark and a new benchmark of fine-grained segmentation proposed in this work. We also demonstrate its application to fine-grained part refinement in image-to-shape reconstruction.

30 citations
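
The key structural idea is top-down recursive binary decomposition with a per-node stopping decision. The sketch below is a purely geometric stand-in (principal-axis splits with size and depth stopping rules) rather than the paper's learned recursive network, but it shows how such a scheme naturally yields an unfixed number of parts.

import numpy as np

def recursive_decompose(points, min_points=32, depth=0, max_depth=6):
    """Recursively split a point cloud in two along its principal axis.

    Geometric stand-in for a learned decomposition: stop when a part is
    small or the hierarchy is deep; return a nested (left, right) tuple
    of leaf point arrays, so the number of parts is not fixed in advance.
    """
    if len(points) <= min_points or depth >= max_depth:
        return points                                   # leaf part
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    side = centered @ vt[0] < 0.0                       # split on principal axis
    if side.all() or not side.any():
        return points                                   # degenerate split, stop
    return (recursive_decompose(points[side], min_points, depth + 1, max_depth),
            recursive_decompose(points[~side], min_points, depth + 1, max_depth))

tree = recursive_decompose(np.random.rand(1000, 3))     # binary hierarchy of parts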

Journal ArticleDOI
TL;DR: This paper presents a graph-based representation for biomedical articles and uses graph kernels to classify those articles into high-level categories, comparing a set-based graph kernel that can handle the disconnected nature of the graphs with a simple linear kernel.
Abstract: Recently, graph representations of text have been showing improved performance over conventional bag-of-words representations in text categorization applications. In this paper, we present a graph-based representation for biomedical articles and use graph kernels to classify those articles into high-level categories. In our representation, common biomedical concepts and semantic relationships are identified with the help of an existing ontology and are used to build a rich graph structure that provides a consistent feature set and preserves additional semantic information that could improve a classifier's performance. We attempt to classify the graphs using both a set-based graph kernel that is capable of dealing with the disconnected nature of the graphs and a simple linear kernel. Finally, we report the results comparing the classification performance of the kernel classifiers to common text-based classifiers.

30 citations
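
As a rough illustration of the two kernels being compared: each (possibly disconnected) concept graph can be represented as a multiset of node labels and labelled edges, so a linear kernel is a dot product of count vectors and a set-based kernel can be an overlap count that ignores connectivity entirely. The feature choices below are hypothetical simplifications of the paper's ontology-derived graphs, not its actual kernels.

from collections import Counter

def graph_features(nodes, edges):
    """Multiset of node labels plus (sorted) labelled edges for one article graph."""
    feats = Counter(nodes)
    feats.update((min(u, v), max(u, v)) for u, v in edges)
    return feats

def linear_kernel(g1, g2):
    """Simple linear kernel: dot product of the two feature count vectors."""
    return sum(c * g2.get(f, 0) for f, c in g1.items())

def set_kernel(g1, g2):
    """Set-based kernel: overlap of the feature sets, indifferent to connectivity."""
    return len(set(g1) & set(g2))

doc_a = graph_features(["protein", "binding", "cell"], [("protein", "binding")])
doc_b = graph_features(["protein", "cell", "membrane"], [("cell", "membrane")])
print(linear_kernel(doc_a, doc_b), set_kernel(doc_a, doc_b))   # 2 2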


Network Information
Related Topics (5)

Optimization problem: 96.4K papers, 2.1M citations, 82% related
Probabilistic logic: 56K papers, 1.3M citations, 82% related
Cluster analysis: 146.5K papers, 2.9M citations, 81% related
Matrix (mathematics): 105.5K papers, 1.9M citations, 81% related
Robustness (computer science): 94.7K papers, 1.6M citations, 80% related
Performance Metrics
Number of papers in the topic in previous years:

Year: Papers
2023: 209
2022: 439
2021: 283
2020: 280
2019: 296
2018: 232