Author

Kun Xu

Other affiliations: IBM, Nanjing Medical University, Tencent
Bio: Kun Xu is an academic researcher from Beijing University of Posts and Telecommunications. The author has contributed to research in topics: Radio over fiber & Photonics. The author has an h-index of 42 and has co-authored 580 publications receiving 6,499 citations. Previous affiliations of Kun Xu include IBM & Nanjing Medical University.


Papers
Journal ArticleDOI
TL;DR: In this article, the effect of nanoparticle fraction on the microstructure and dielectric properties of composite films is investigated, confirming that these ultimate-sized nanocrystals can perform as superior high-permittivity fillers in nanocomposites for energy-storage applications.

298 citations

Proceedings ArticleDOI
01 Sep 2015
TL;DR: This paper proposes to learn more robust relation representations from shortest dependency paths through a convolutional neural network, takes relation directionality into account, and proposes a straightforward negative sampling strategy to improve the assignment of subjects and objects.
Abstract: Syntactic features play an essential role in identifying relationships in a sentence. Previous neural network models work directly on raw word sequences or constituent parse trees and thus often suffer from irrelevant information introduced when subjects and objects are far apart. In this paper, we propose to learn more robust relation representations from shortest dependency paths through a convolutional neural network. We further take the relation directionality into account and propose a straightforward negative sampling strategy to improve the assignment of subjects and objects. Experimental results show that our method outperforms the state-of-the-art approaches on the SemEval-2010 Task 8 dataset.
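As a rough illustration of the approach described above, here is a minimal PyTorch sketch (an assumed simplification, not the authors' released code) that encodes the word sequence along a shortest dependency path with a 1-D convolution and max-over-time pooling before classifying the relation; the hyperparameters and the 19-way output (SemEval-2010 Task 8's nine relations in both directions plus Other) are illustrative.

import torch
import torch.nn as nn

class SDPConvEncoder(nn.Module):
    """Convolutional encoder over the shortest dependency path (SDP)
    between the two candidate entities of a sentence."""
    def __init__(self, vocab_size, emb_dim=100, num_filters=200,
                 window=3, num_relations=19):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size=window,
                              padding=window // 2)
        self.classify = nn.Linear(num_filters, num_relations)

    def forward(self, path_token_ids):
        # path_token_ids: (batch, path_len) word ids along the SDP
        x = self.embed(path_token_ids)      # (batch, path_len, emb_dim)
        x = x.transpose(1, 2)               # (batch, emb_dim, path_len)
        x = torch.relu(self.conv(x))        # (batch, num_filters, path_len)
        x = x.max(dim=2).values             # max-over-time pooling
        return self.classify(x)             # relation logits

# Hypothetical usage: a batch of two padded dependency paths of length 7.
model = SDPConvEncoder(vocab_size=5000)
logits = model(torch.randint(0, 5000, (2, 7)))   # shape: (2, 19)

The negative sampling strategy mentioned in the abstract would sit on top of such an encoder, for example by also scoring the path with subject and object swapped and training the model to reject the reversed assignment.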

284 citations

Proceedings ArticleDOI
01 Aug 2016
TL;DR: The authors use a neural network based relation extractor to retrieve candidate answers from Freebase and then infer over Wikipedia to validate these answers, achieving an F_1 of 53.3% on the WebQuestions question answering dataset.
Abstract: Existing knowledge-based question answering systems often rely on small annotated training data. While shallow methods like relation extraction are robust to data scarcity, they are less expressive than the deep meaning representation methods like semantic parsing, thereby failing at answering questions involving multiple constraints. Here we alleviate this problem by empowering a relation extraction method with additional evidence from Wikipedia. We first present a neural network based relation extractor to retrieve the candidate answers from Freebase, and then infer over Wikipedia to validate these answers. Experiments on the WebQuestions question answering dataset show that our method achieves an F_1 of 53.3%, a substantial improvement over the state-of-the-art.
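The control flow of the two-stage pipeline described above can be sketched roughly as follows; every helper here (link_topic_entity, relations_of, objects, page_of, and the two scorers) is a hypothetical placeholder standing in for components the abstract does not expose, so this is an outline of the idea rather than the authors' system.

def answer(question, kb, wiki, relation_scorer, support_scorer, threshold=0.5):
    """Stage 1: a neural relation extractor ranks KB relations of the topic
    entity and collects candidate answers from Freebase.
    Stage 2: candidates are kept only if their Wikipedia page supports the
    question. All helpers are hypothetical placeholders."""
    topic = kb.link_topic_entity(question)                  # entity linking
    candidates = []
    for relation in kb.relations_of(topic):
        if relation_scorer(question, relation) > threshold:
            candidates.extend(kb.objects(topic, relation))  # candidate answers
    validated = [c for c in candidates
                 if support_scorer(question, wiki.page_of(c)) > threshold]
    return validated or candidates    # fall back to stage-1 output if none validate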

231 citations

Proceedings ArticleDOI
01 May 2019
TL;DR: This paper introduces the topic entity graph, a local sub-graph of an entity, to represent entities with their contextual information in a KG, and proposes a graph-attention based solution that outperforms previous state-of-the-art methods by a large margin.
Abstract: Previous cross-lingual knowledge graph (KG) alignment studies rely on entity embeddings derived only from monolingual KG structural information, which may fail at matching entities that have different facts in the two KGs. In this paper, we introduce the topic entity graph, a local sub-graph of an entity, to represent entities with their contextual information in a KG. From this view, the KG-alignment task can be formulated as a graph matching problem; we further propose a graph-attention based solution, which first matches all entities in the two topic entity graphs and then jointly models the local matching information to derive a graph-level matching vector. Experiments show that our model outperforms previous state-of-the-art methods by a large margin.
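One assumed way to realize the graph-attention matching step described above is sketched below: given node embeddings of two topic entity graphs (e.g. produced by a graph encoder), each node attends over the other graph, local comparisons are formed, and they are pooled into a single graph-level matching vector. The dimensions, comparison features, and mean pooling are illustrative choices, not the paper's exact architecture.

import torch
import torch.nn.functional as F

def cross_graph_match(h1, h2):
    """h1: (n1, d) and h2: (n2, d) node embeddings of two topic entity graphs.
    Returns a single graph-level matching vector."""
    sim = h1 @ h2.t()                    # (n1, n2) node-to-node similarities
    h1_ctx = F.softmax(sim, dim=1) @ h2  # soft alignment for each node of graph 1
    h2_ctx = F.softmax(sim.t(), dim=1) @ h1
    m1 = torch.cat([h1, h1_ctx, (h1 - h1_ctx).abs()], dim=1)   # local matching features
    m2 = torch.cat([h2, h2_ctx, (h2 - h2_ctx).abs()], dim=1)
    return torch.cat([m1.mean(dim=0), m2.mean(dim=0)])         # jointly pooled vector

# Hypothetical usage with random 64-dimensional node embeddings.
match_vec = cross_graph_match(torch.randn(5, 64), torch.randn(7, 64))  # shape: (384,)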

179 citations


Cited by
Proceedings Article
01 Jan 1999
TL;DR: In this paper, the authors describe photonic crystals in terms of the analogy between electron waves in crystals and light waves in artificial periodic dielectric structures, and note that interest in periodic structures has been stimulated by the rapid development of semiconductor technology, which now allows the fabrication of artificial structures whose period is comparable with the wavelength of light in the visible and infrared ranges.
Abstract: The term photonic crystals arises from the analogy between electron waves in crystals and light waves in artificial periodic dielectric structures. In recent years the investigation of one-, two- and three-dimensional periodic structures has attracted widespread attention from the world optics community because of the great potential of such structures in advanced applied optics. Interest in periodic structures has been stimulated by the rapid development of semiconductor technology, which now allows the fabrication of artificial structures whose period is comparable with the wavelength of light in the visible and infrared ranges.
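To make the electron/photon analogy concrete, the standard formulation (a textbook statement, not taken from this abstract) casts Maxwell's equations in a periodic dielectric as an eigenvalue problem whose Bloch-form solutions define photonic bands, in the same way a periodic crystal potential defines electronic bands:

\[
  \nabla \times \left( \frac{1}{\varepsilon(\mathbf{r})}\, \nabla \times \mathbf{H}(\mathbf{r}) \right)
  = \left( \frac{\omega}{c} \right)^{2} \mathbf{H}(\mathbf{r}),
  \qquad \varepsilon(\mathbf{r} + \mathbf{R}) = \varepsilon(\mathbf{r}),
\]
\[
  \mathbf{H}_{\mathbf{k}}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}\, \mathbf{u}_{\mathbf{k}}(\mathbf{r}),
  \qquad \mathbf{u}_{\mathbf{k}}(\mathbf{r} + \mathbf{R}) = \mathbf{u}_{\mathbf{k}}(\mathbf{r}).
\]

Frequency ranges in which no band \(\omega_n(\mathbf{k})\) exists for any \(\mathbf{k}\) are photonic band gaps, the property that makes such periodic structures attractive for the applied optics mentioned above.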

2,722 citations

Posted Content
TL;DR: A detailed review of existing graph neural network models is provided, the applications are systematically categorized, and four open problems for future research are proposed.
Abstract: Lots of learning tasks require dealing with graph data which contains rich relation information among elements. Modeling physics systems, learning molecular fingerprints, predicting protein interface, and classifying diseases demand a model to learn from graph inputs. In other domains such as learning from non-structural data like texts and images, reasoning on extracted structures (like the dependency trees of sentences and the scene graphs of images) is an important research topic which also needs graph reasoning models. Graph neural networks (GNNs) are neural models that capture the dependence of graphs via message passing between the nodes of graphs. In recent years, variants of GNNs such as graph convolutional network (GCN), graph attention network (GAT), graph recurrent network (GRN) have demonstrated ground-breaking performances on many deep learning tasks. In this survey, we propose a general design pipeline for GNN models and discuss the variants of each component, systematically categorize the applications, and propose four open problems for future research.
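For orientation, the message-passing scheme shared by the GCN, GAT, and GRN variants mentioned above can be illustrated with the toy sketch below (an illustrative simplification, not code from the survey): each node repeatedly aggregates its neighbours' states and updates its own representation.

import torch

def message_passing(adj, h, w, steps=2):
    """adj: (n, n) adjacency matrix, h: (n, d) node features,
    w: (d, d) shared weight matrix applied at every step."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)   # node degrees (avoid div by 0)
    for _ in range(steps):
        messages = adj @ h / deg          # mean-aggregate neighbour states
        h = torch.tanh(messages @ w)      # update node states
    return h

# Hypothetical toy graph: a 3-node path 0-1-2 with 4-dimensional features.
adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
out = message_passing(adj, torch.randn(3, 4), torch.randn(4, 4))   # (3, 4)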

2,494 citations

Reference EntryDOI
15 Oct 2004

2,118 citations

01 Jan 2005
TL;DR: In “Constructing a Language,” Tomasello presents a contrasting theory of how the child acquires language: it is not a universal grammar that enables language development; rather, two sets of cognitive skills resulting from biological/phylogenetic adaptations are fundamental to the ontogenetic origins of language.
Abstract: Child psychiatrists, pediatricians, and other child clinicians need to have a solid understanding of child language development. There are at least four important reasons that make this necessary. First, slowing, arrest, and deviation of language development are highly associated with, and complicate the course of, child psychopathology. Second, language competence plays a crucial role in emotional and mood regulation, evaluation, and therapy. Third, language deficits are the most frequent underpinning of the learning disorders, ubiquitous in our clinical populations. Fourth, clinicians should not confuse the rich linguistic and dialectal diversity of our clinical populations with abnormalities in child language development. The challenge for the clinician becomes, then, how to get immersed in the captivating field of child language acquisition without getting overwhelmed by its conceptual and empirical complexity.

In the past 50 years and since the seminal works of Roger Brown, Jerome Bruner, and Catherine Snow, child language researchers (often known as developmental psycholinguists) have produced a remarkable body of knowledge. Linguists such as Chomsky and philosophers such as Grice have strongly influenced the science of child language. One of the major tenets of Chomskian linguistics (known as generative grammar) is that children’s capacity to acquire language is “hardwired” with “universal grammar”—an innate language acquisition device (LAD), a language “instinct”—at its core. This view is in part supported by the assertion that the linguistic input that children receive is relatively dismal and of poor quality relative to the high quantity and quality of output that they manage to produce after age 2 and that only an advanced, innate capacity to decode and organize linguistic input can enable them to “get from here (prelinguistic infant) to there (linguistic child).”

In “Constructing a Language,” Tomasello presents a contrasting theory of how the child acquires language: It is not a universal grammar that allows for language development. Rather, human cognition universals of communicative needs and vocal-auditory processing result in some language universals, such as nouns and verbs as expressions of reference and predication (p. 19). The author proposes that two sets of cognitive skills resulting from biological/phylogenetic adaptations are fundamental to the ontogenetic origins of language. These sets of inherited cognitive skills are intention-reading on the one hand and pattern-finding on the other. Intention-reading skills encompass the prelinguistic infant’s capacities to share attention to outside events with other persons, establishing joint attentional frames, to understand other people’s communicative intentions, and to imitate the adult’s communicative intentions (an intersubjective form of imitation that requires symbolic understanding and perspective-taking). Pattern-finding skills include the ability of infants as young as 7 months old to analyze concepts and percepts (most relevant here, auditory or speech percepts) and create concrete or abstract categories that contain analogous items.

Tomasello, a most prominent developmental scientist with research foci on child language acquisition and on social cognition and social learning in children and primates, succinctly and clearly introduces the major points of his theory and his views on the origins of language in the initial chapters.
In subsequent chapters, he delves into the details by covering most language acquisition domains, namely word (lexical) learning, syntax and morphology, and conversation, narrative, and extended discourse. Although one of the remaining domains (pragmatics) is at the core of his theory and permeates the text throughout, the relative paucity of passages explicitly devoted to discussing acquisition and pro…

1,757 citations