Author

Chenjie Gu

Bio: Chenjie Gu is an academic researcher from Intel. The author has contributed to research on topics including Mixed-signal integrated circuits and Electron–positron annihilation, has an h-index of 23, and has co-authored 71 publications receiving 1,717 citations. Previous affiliations of Chenjie Gu include Google and the University of Minnesota.


Papers
Proceedings Article
Yujia Li, Chenjie Gu, Thomas Dullien, Oriol Vinyals, Pushmeet Kohli
24 May 2019
TL;DR: This paper proposes a novel Graph Matching Network model that, given a pair of graphs as input, computes a similarity score between them by jointly reasoning on the pair through a new cross-graph attention-based matching mechanism.
Abstract: This paper addresses the challenging problem of retrieval and matching of graph structured objects, and makes two key contributions. First, we demonstrate how Graph Neural Networks (GNN), which have emerged as an effective model for various supervised prediction problems defined on structured data, can be trained to produce embedding of graphs in vector spaces that enables efficient similarity reasoning. Second, we propose a novel Graph Matching Network model that, given a pair of graphs as input, computes a similarity score between them by jointly reasoning on the pair through a new cross-graph attention-based matching mechanism. We demonstrate the effectiveness of our models on different domains including the challenging problem of control-flow-graph based function similarity search that plays an important role in the detection of vulnerabilities in software systems. The experimental analysis demonstrates that our models are not only able to exploit structure in the context of similarity learning but they can also outperform domain-specific baseline systems that have been carefully hand-engineered for these problems.
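The cross-graph attention idea can be sketched in a few lines: each node attends over the nodes of the other graph, and the graph-level similarity is driven by how little each node's "soft counterpart" differs from it. This is a minimal NumPy illustration under assumed details, not the authors' model; the function names, the dot-product attention, and the sum aggregation are simplifying assumptions.

```python
import numpy as np

def cross_graph_messages(h1, h2):
    """Soft-match each node of one graph to the nodes of the other.

    h1, h2: (n1, d) and (n2, d) arrays of node embeddings.
    Returns per-node difference messages for h1; these are small
    when every node in h1 has a close counterpart in h2.
    """
    scores = h1 @ h2.T                       # (n1, n2) similarity logits
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)  # row-wise softmax
    matched = attn @ h2                      # soft counterpart of each h1 node
    return h1 - matched

def similarity(h1, h2):
    """Graph-level score: negative norm of the aggregated mismatch."""
    mu1 = cross_graph_messages(h1, h2).sum(axis=0)
    mu2 = cross_graph_messages(h2, h1).sum(axis=0)
    return -float(np.linalg.norm(mu1) + np.linalg.norm(mu2))
```

With well-aligned embeddings the mismatch messages nearly vanish and the score approaches zero; dissimilar graphs score more negatively. In the actual model these messages feed back into GNN propagation layers rather than being scored directly.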

281 citations

Journal ArticleDOI
TL;DR: QLMOR demonstrates that Volterra-kernel based nonlinear MOR techniques can in fact have far broader applicability than previously suspected, possibly being competitive with trajectory-based methods (e.g., trajectory piece-wise linear reduced order modeling) and nonlinear-projection based methods ( e.g, maniMOR).
Abstract: We present a projection-based nonlinear model order reduction method, named model order reduction via quadratic-linear systems (QLMOR). QLMOR employs two novel ideas: 1) we show that nonlinear ordinary differential equations, and more generally differential-algebraic equations (DAEs) with many commonly encountered nonlinear kernels can be rewritten equivalently in a special representation, quadratic-linear differential algebraic equations (QLDAEs), and 2) we perform a Volterra analysis to derive the Volterra kernels, and we adapt the moment-matching reduction technique of nonlinear model order reduction method (NORM) [1] to reduce these QLDAEs into QLDAEs of much smaller size. Because of the generality of the QLDAE representation, QLMOR has significantly broader applicability than Taylor-expansion-based methods [1]-[3] since there is no approximation involved in the transformation from original DAEs to QLDAEs. Because the reduced model has only quadratic nonlinearities, its computational complexity is less than that of similar prior methods. In addition, QLMOR, unlike NORM, totally avoids explicit moment calculations, hence it has improved numerical stability properties as well. We compare QLMOR against prior methods [1]-[3] on a circuit and a biochemical reaction-like system, and demonstrate that QLMOR-reduced models retain accuracy over a significantly wider range of excitation than Taylor-expansion-based methods [1]-[3]. QLMOR, therefore, demonstrates that Volterra-kernel based nonlinear MOR techniques can in fact have far broader applicability than previously suspected, possibly being competitive with trajectory-based methods (e.g., trajectory piece-wise linear reduced order modeling [4]) and nonlinear-projection based methods (e.g., maniMOR [5]).
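The exact quadratic-linear rewriting at the heart of QLMOR can be illustrated on a toy cubic ODE. This sketch is my own minimal example, not the paper's algorithm: for x' = -x^3, introducing the auxiliary variable y = x^2 yields an equivalent system with only quadratic nonlinearities (x' = -xy, y' = 2xx' = -2y^2), with no Taylor-series approximation involved.

```python
def euler_original(x0, dt, steps):
    """Forward-Euler integration of the cubic ODE x' = -x**3."""
    x = x0
    for _ in range(steps):
        x += dt * (-x**3)
    return x

def euler_lifted(x0, dt, steps):
    """Forward-Euler integration of the exact quadratic-linear lifting:
        x' = -x*y,   y' = -2*y**2,   with y(0) = x(0)**2  (y tracks x**2).
    """
    x, y = x0, x0**2
    for _ in range(steps):
        dx = -x * y
        dy = -2.0 * y * y
        x += dt * dx
        y += dt * dy
    return x
```

Both trajectories track the closed-form solution x(t) = x0 / sqrt(1 + 2*x0**2*t); the invariant y = x**2 drifts only at the discretization error of the integrator, since the rewriting itself is exact.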

173 citations

Posted Content
TL;DR: The Differentiable Digital Signal Processing library is introduced, which enables direct integration of classic signal processing elements with deep learning methods and achieves high-fidelity generation without the need for large autoregressive models or adversarial losses.
Abstract: Most generative models of audio directly generate samples in one of two domains: time or frequency. While sufficient to express any signal, these representations are inefficient, as they do not utilize existing knowledge of how sound is generated and perceived. A third approach (vocoders/synthesizers) successfully incorporates strong domain knowledge of signal processing and perception, but has been less actively researched due to limited expressivity and difficulty integrating with modern auto-differentiation-based machine learning methods. In this paper, we introduce the Differentiable Digital Signal Processing (DDSP) library, which enables direct integration of classic signal processing elements with deep learning methods. Focusing on audio synthesis, we achieve high-fidelity generation without the need for large autoregressive models or adversarial losses, demonstrating that DDSP enables utilizing strong inductive biases without losing the expressive power of neural networks. Further, we show that combining interpretable modules permits manipulation of each separate model component, with applications such as independent control of pitch and loudness, realistic extrapolation to pitches not seen during training, blind dereverberation of room acoustics, transfer of extracted room acoustics to new environments, and transformation of timbre between disparate sources. In short, DDSP enables an interpretable and modular approach to generative modeling, without sacrificing the benefits of deep learning. The library is publicly available at this https URL and we welcome further contributions from the community and domain experts.
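The core signal-processing element behind DDSP-style synthesis, a bank of sinusoids at integer multiples of a fundamental, can be sketched plainly. This is a NumPy toy with a fixed f0 and constant amplitudes, not the library's differentiable, time-varying implementation; function name and defaults are illustrative assumptions.

```python
import numpy as np

def harmonic_synth(f0, amplitudes, sr=16000, duration=0.5):
    """Additive synthesis: sum sinusoids at integer multiples of f0.

    f0         : fundamental frequency in Hz (held constant here)
    amplitudes : per-harmonic amplitudes, harmonic k = 1, 2, ...
    """
    t = np.arange(int(sr * duration)) / sr
    audio = np.zeros_like(t)
    for k, a in enumerate(amplitudes, start=1):
        audio += a * np.sin(2 * np.pi * k * f0 * t)
    return audio
```

Because every operation here is differentiable with respect to f0 and the amplitudes, the same structure written in an autodiff framework lets a neural network drive the synthesizer's controls end to end, which is the inductive bias the abstract describes.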

117 citations

Posted Content
Yujia Li, Chenjie Gu, Thomas Dullien, Oriol Vinyals, Pushmeet Kohli
TL;DR: In this article, a Graph Matching Network with a cross-graph attention-based matching mechanism is proposed for retrieval and matching of graph-structured objects, alongside GNNs trained to produce embeddings of graphs in vector spaces that enable efficient similarity reasoning.
Abstract: This paper addresses the challenging problem of retrieval and matching of graph structured objects, and makes two key contributions. First, we demonstrate how Graph Neural Networks (GNN), which have emerged as an effective model for various supervised prediction problems defined on structured data, can be trained to produce embedding of graphs in vector spaces that enables efficient similarity reasoning. Second, we propose a novel Graph Matching Network model that, given a pair of graphs as input, computes a similarity score between them by jointly reasoning on the pair through a new cross-graph attention-based matching mechanism. We demonstrate the effectiveness of our models on different domains including the challenging problem of control-flow-graph based function similarity search that plays an important role in the detection of vulnerabilities in software systems. The experimental analysis demonstrates that our models are not only able to exploit structure in the context of similarity learning but they can also outperform domain-specific baseline systems that have been carefully hand-engineered for these problems.

101 citations

Journal ArticleDOI
O. Adriani, M. Aguilar-Benitez, Steven Ahlen, J. Alcaraz, +509 more (37 institutions)
TL;DR: In this paper, neutral heavy leptons that are isosinglets under the standard SU(2)_L gauge group were searched for. No evidence for a signal was found, and the limit Br(Z⁰ → ν_l N_l) < 3 × 10⁻⁵ is set at the 95% CL for the mass range from 3 GeV up to M_Z.

99 citations


Cited by
Journal ArticleDOI
Claude Amsler, Michael Doser, Mario Antonelli, D. M. Asner, +173 more (86 institutions)
TL;DR: This biennial Review summarizes much of particle physics, using data from previous editions.

12,798 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, approximate inference, and the combining of models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

Journal ArticleDOI
TL;DR: The Pythia program as mentioned in this paper can be used to generate high-energy-physics ''events'' (i.e. sets of outgoing particles produced in the interactions between two incoming particles).
Abstract: The Pythia program can be used to generate high-energy-physics ''events'', i.e. sets of outgoing particles produced in the interactions between two incoming particles. The objective is to provide as accurate as possible a representation of event properties in a wide range of reactions, within and beyond the Standard Model, with emphasis on those where strong interactions play a role, directly or indirectly, and therefore multihadronic final states are produced. The physics is then not understood well enough to give an exact description; instead the program has to be based on a combination of analytical results and various QCD-based models. This physics input is summarized here, for areas such as hard subprocesses, initial- and final-state parton showers, underlying events and beam remnants, fragmentation and decays, and much more. Furthermore, extensive information is provided on all program elements: subroutines and functions, switches and parameters, and particle and process data. This should allow the user to tailor the generation task to the topics of interest.

6,300 citations

Journal ArticleDOI
TL;DR: This book develops a geometric theory of biological rhythms, covering phase singularities, ring populations, attracting cycles and isochrons, and the measurement of circadian clocks across chemical and biological systems.
Abstract: 1980 Preface * 1999 Preface * 1999 Acknowledgements * Introduction * 1 Circular Logic * 2 Phase Singularities (Screwy Results of Circular Logic) * 3 The Rules of the Ring * 4 Ring Populations * 5 Getting Off the Ring * 6 Attracting Cycles and Isochrons * 7 Measuring the Trajectories of a Circadian Clock * 8 Populations of Attractor Cycle Oscillators * 9 Excitable Kinetics and Excitable Media * 10 The Varieties of Phaseless Experience: In Which the Geometrical Orderliness of Rhythmic Organization Breaks Down in Diverse Ways * 11 The Firefly Machine 12 Energy Metabolism in Cells * 13 The Malonic Acid Reagent ('Sodium Geometrate') * 14 Electrical Rhythmicity and Excitability in Cell Membranes * 15 The Aggregation of Slime Mold Amoebae * 16 Numerical Organizing Centers * 17 Electrical Singular Filaments in the Heart Wall * 18 Pattern Formation in the Fungi * 19 Circadian Rhythms in General * 20 The Circadian Clocks of Insect Eclosion * 21 The Flower of Kalanchoe * 22 The Cell Mitotic Cycle * 23 The Female Cycle * References * Index of Names * Index of Subjects

3,424 citations