
Showing papers on "Phrase" published in 2013


Posted Content
TL;DR: This method translates missing word and phrase entries by learning language structures from large monolingual data and a mapping between languages from small bilingual data; it uses distributed representations of words and learns a linear mapping between the vector spaces of the languages.
Abstract: Dictionaries and phrase tables are the basis of modern statistical machine translation systems. This paper develops a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures based on large monolingual data and mapping between languages from small bilingual data. It uses distributed representation of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish. This method makes few assumptions about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pair.
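The paper's core idea lends itself to a toy sketch: learn a least-squares linear map between two embedding spaces from a small bilingual dictionary, then translate by nearest neighbour in the target space. The vectors and the mapping below are invented for illustration; real embeddings come from large monolingual corpora and the spaces are only approximately linearly related.

```python
import numpy as np

# Toy embeddings: four "words" in a 3-dim source space; the target space is
# an exact linear image of it (real embedding spaces are only approximately so).
src = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [1.0, 1.0, 0.0]])
true_map = np.array([[0.0, 2.0, 0.0],
                     [2.0, 0.0, 0.0],
                     [0.0, 0.0, 2.0]])
tgt = src @ true_map.T  # "translations" of the four source words

# Learn the linear mapping W by least squares from the word pairs
# (the small bilingual dictionary).
W = np.linalg.lstsq(src, tgt, rcond=None)[0].T

def translate(x, tgt_vocab):
    # Map a source vector into target space, then pick the nearest target
    # word by cosine similarity (precision@1 here vs. precision@5 in the paper).
    z = W @ x
    sims = tgt_vocab @ z / (np.linalg.norm(tgt_vocab, axis=1) * np.linalg.norm(z))
    return int(np.argmax(sims))

print(translate(src[0], tgt))  # → 0: recovers the word's own translation
```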

1,564 citations


Book
19 Jul 2013
TL;DR: This chapter discusses language architecture, semantics, and metaphorical modes of expression in the context of a clause-based system.
Abstract: Preface PART I: THE CLAUSE Chapter 1: The architecture of language Chapter 2: Towards a functional grammar Chapter 3: Clause as message Chapter 4: Clause as exchange Chapter 5: Clause as representation PART II: ABOVE, BELOW AND BEYOND THE CLAUSE Chapter 6: Below the clause: groups and phrases Chapter 7: Above the clause: the clause complex Chapter 8: Group and phrase complexes Chapter 9: Around the clause: cohesion and discourse Chapter 10: Beyond the clause: metaphorical modes of expression References Index

1,229 citations


Patent
03 Sep 2013
TL;DR: In this paper, a system for guiding a search for information is presented, which comprises a user interface that accepts a phrase and receives at least one suggestion based at least in part on the phrase.
Abstract: A system for guiding a search for information is presented. The system comprises a user interface that accepts a phrase and receives at least one suggestion based at least in part on the phrase. The system also includes a phrase suggestion engine that matches the phrase with the at least one suggestion. Methods of using the system are also provided.

265 citations


Patent
08 Jan 2013
TL;DR: In this article, a phrase-based modeling of generic structures of verbal interaction is proposed for the purpose of automating part of the design of grammar networks, which can regulate, control, and define the content and scope of human-machine interaction in natural language voice user interfaces.
Abstract: The invention enables creation of grammar networks that can regulate, control, and define the content and scope of human-machine interaction in natural language voice user interfaces (NLVUI). More specifically, the invention concerns a phrase-based modeling of generic structures of verbal interaction and use of these models for the purpose of automating part of the design of such grammar networks.

229 citations


Journal ArticleDOI
TL;DR: This study highlights that many severely language-delayed children in the present sample attained phrase or fluent speech at or after age 4 years; these findings implicate the importance of evaluating and considering nonverbal skills, both cognitive and social, when developing interventions and setting goals for language development.
Abstract: OBJECTIVE: To examine the prevalence and predictors of language attainment in children with autism spectrum disorder (ASD) and severe language delay. We hypothesized greater autism symptomatology and lower intelligence among children who do not attain phrase/fluent speech, with nonverbal intelligence and social engagement emerging as the strongest predictors of outcome. METHODS: Data used for the current study were from 535 children with ASD who were at least 8 years of age (mean = 11.6 years, SD = 2.73 years) and who did not acquire phrase speech before age 4. Logistic and Cox proportionate hazards regression analyses examined predictors of phrase and fluent speech attainment and age at acquisition, respectively. RESULTS: A total of 372 children (70%) attained phrase speech and 253 children (47%) attained fluent speech at or after age 4. No demographic or child psychiatric characteristics were associated with phrase speech attainment after age 4, whereas slightly older age and increased internalizing symptoms were associated with fluent speech. In the multivariate analyses, higher nonverbal IQ and less social impairment were both independently associated with the acquisition of phrase and fluent speech, as well as earlier age at acquisition. Stereotyped behavior/repetitive interests and sensory interests were not associated with delayed speech acquisition. CONCLUSIONS: This study highlights that many severely language-delayed children in the present sample attained phrase or fluent speech at or after age 4 years. These data also implicate the importance of evaluating and considering nonverbal skills, both cognitive and social, when developing interventions and setting goals for language development.

213 citations


Book ChapterDOI
02 Dec 2013
TL;DR: This chapter traces a shift in linguistics away from treating 'idiomatic' usage as a marginal rag-bag beside productive, law-abiding syntax, and towards studying fully and partially fixed word combinations seriously (see Weinert, 1995; Howarth, 1998).
Abstract: Until comparatively recently, 'idiomatic' was something of a rag-bag category into which language teachers were apt to consign anything too awkward to be accounted for by the rules of syntax. 'Idiomatic' would explain, for example, why we have main roads but cannot say this road is main, why our houses can be spick and span but not just spick, why we can say you idiot but not me idiot or you stupid. These oddities were regarded as on the fringe of language, amusing but not nearly as important as the syntactic rules which regulate the largest part of language use. Second language teachers might use up a few light-hearted classroom minutes giving learners a colourful idiomatic phrase of the day, especially of the fixed or metaphorical phrase variety (he let the cat out of the bag, come to think of it, you're telling me!), but the main focus of the lesson should be on productive and law-abiding syntax. Although the notion that language is largely generated by a system of rules is still central to much second language acquisition research, there has been a shift of emphasis in some linguistic quarters (though not in most classrooms) away from the centrality of grammatical knowledge in language use and towards taking the rag-bag of 'idiomatic' usage far more seriously. There is now a body of research in linguistics which studies the extent to which words operate in fully or partially fixed combinations as opposed to within a productive system of syntactic rules (see Weinert, 1995, for an excellent review; also Howarth, 1998). The use of fully or partially fixed combinations of words has been suggested as a processing strategy in both first and second language use which permits fluent and fast language production (Raupach, 1984; Pawley and Syder, 1983). It has been further suggested that this is also a learning strategy adopted by both first and second language learners whereby regularly encountered combinations of words are committed unanalysed to memory and then analysed for productive grammatical regularities (Ellis, 1996).

161 citations


Patent
11 Jul 2013
TL;DR: Technologies are described that allow a user to wake up a computing device operating in a low-power state and to be verified by speaking a single wake phrase, with wake phrase recognition performed by a low-power engine.
Abstract: Technologies are described herein that allow a user to wake up a computing device operating in a low-power state and for the user to be verified by speaking a single wake phrase. Wake phrase recognition is performed by a low-power engine. In some embodiments, the low-power engine may also perform speaker verification. In other embodiments, the mobile device wakes up after a wake phrase is recognized and a component other than the low-power engine performs speaker verification on a portion of the audio input comprising the wake phrase. More than one wake phrase may be associated with a particular user, and separate users may be associated with different wake phrases. Different wake phrases may cause the device to transition from a low-power state to various active states.
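The two-stage flow described in the claim (low-power phrase spotting, then speaker verification, then a phrase-dependent state transition) can be sketched as follows. The phrases, speaker names, and states are invented, and a real system operates on audio signals rather than transcripts.

```python
# Hypothetical sketch of the two-stage wake flow: a low-power recognizer
# spots one of several wake phrases, a second stage verifies the speaker,
# and different phrases map to different target power states.
WAKE_PHRASES = {                  # phrase -> target state (illustrative)
    "hello device": "active",
    "note mode": "dictation",
}
ENROLLED = {"hello device": "alice", "note mode": "alice"}

def process_audio(transcript, speaker):
    """Return the new device state, or None if the device stays asleep."""
    phrase = transcript.lower().strip()
    if phrase not in WAKE_PHRASES:        # stage 1: low-power phrase spotting
        return None
    if ENROLLED.get(phrase) != speaker:   # stage 2: speaker verification
        return None
    return WAKE_PHRASES[phrase]           # phrase-dependent state transition

print(process_audio("Hello device", "alice"))  # → active
```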

136 citations


Journal ArticleDOI
01 Jan 2013-Lingua
TL;DR: This paper shows that the recursion-based conception implemented within Match Theory allows for a conceptually and empirically cleaner understanding of the phonological facts and generalizations in Japanese, as well as for an understanding of the respective roles of syntax and phonology in determining prosodic constituent structure organization, and of the limitation in types of distinctions in prosodic category that are made in phonological representation.

122 citations


Journal ArticleDOI
TL;DR: This article demonstrates that the most common prosodic realization of focus can be subsumed typologically under the notion of alignment: a focused constituent is preferably aligned prosodically with the right or left edge of a prosodic domain the size of either a prosodic phrase or an intonation phrase.
Abstract: This article demonstrates that the most common prosodic realization of focus can be subsumed typologically under the notion of alignment: a focused constituent is preferably aligned prosodically with the right or left edge of a prosodic domain the size of either a prosodic phrase or an intonation phrase. Languages have different strategies to fulfill alignment, some of which are illustrated in this paper: syntactic movement, cleft constructions, insertion of a prosodic boundary, and enhancement of existing boundaries. Additionally, morpheme insertion and pitch accent plus deaccenting can also be understood as ways of achieving alignment. None of these strategies is obligatory in any language. For a focus to be aligned is just a preference, not a necessary property, and higher-ranked constraints often block the fulfillment of alignment. A stronger focus, like a contrastive one, is more prone to be aligned than a weaker one, like an informational focus. Prominence, which has often been claimed to be the universal prosodic property of focus (see Truckenbrodt 2005 and Büring 2010 among others), may co-occur with alignment, as in the case of a right-aligned nuclear stress, but crucially, alignment is not equivalent to prominence. Rather, alignment is understood as a means to separate constituents with different information structural roles in different prosodic domains, to ‘package’ them individually.

110 citations


Patent
Byoung-Ju Kim1, Prashant Desai1
06 Jun 2013
TL;DR: In this article, a method for voice activated search and control comprises converting, using an electronic device, multiple first speech signals into one or more first words, which are used for determining a first phrase contextually related to an application space.
Abstract: A method for voice activated search and control comprises converting, using an electronic device, multiple first speech signals into one or more first words. The one or more first words are used for determining a first phrase contextually related to an application space. The first phrase is used for performing a first action within the application space. Multiple second speech signals are converted, using the electronic device, into one or more second words. The one or more second words are used for determining a second phrase contextually related to the application space. The second phrase is used for performing a second action that is associated with a result of the first action within the application space.

102 citations


Proceedings ArticleDOI
01 Dec 2013
TL;DR: This paper proposes motion atom and phrase as a mid-level temporal "part" for representing and classifying complex action, and introduces a bottom-up phrase construction algorithm and a greedy selection method for this mining task.
Abstract: This paper proposes motion atom and phrase as a mid-level temporal "part" for representing and classifying complex action. Motion atom is defined as an atomic part of action, and captures the motion information of action video in a short temporal scale. Motion phrase is a temporal composite of multiple motion atoms with an AND/OR structure, which further enhances the discriminative ability of motion atoms by incorporating temporal constraints in a longer scale. Specifically, given a set of weakly labeled action videos, we firstly design a discriminative clustering method to automatically discover a set of representative motion atoms. Then, based on these motion atoms, we mine effective motion phrases with high discriminative and representative power. We introduce a bottom-up phrase construction algorithm and a greedy selection method for this mining task. We examine the classification performance of the motion atom and phrase based representation on two complex action datasets: Olympic Sports and UCF50. Experimental results show that our method achieves superior performance over recent published methods on both datasets.

Journal ArticleDOI
TL;DR: This article gives an overview of all aspects of Phrase Detectives, from the design of the game and the HLT methods the authors used to the results obtained so far, and summarizes the lessons learned in developing the game, which should help other researchers design and implement similar games.
Abstract: We are witnessing a paradigm shift in Human Language Technology (HLT) that may well have an impact on the field comparable to the statistical revolution: acquiring large-scale resources by exploiting collective intelligence. An illustration of this new approach is Phrase Detectives, an interactive online game with a purpose for creating anaphorically annotated resources that makes use of a highly distributed population of contributors with different levels of expertise.The purpose of this article is to first of all give an overview of all aspects of Phrase Detectives, from the design of the game and the HLT methods we used to the results we have obtained so far. It furthermore summarizes the lessons that we have learned in developing this game which should help other researchers to design and implement similar games.

Proceedings ArticleDOI
11 Aug 2013
TL;DR: This paper proposes an algorithm for recursively constructing a hierarchy of topics from a collection of content-representative documents, characterizing each topic in the hierarchy by an integrated ranked list of mixed-length phrases.
Abstract: A high quality hierarchical organization of the concepts in a dataset at different levels of granularity has many valuable applications such as search, summarization, and content browsing. In this paper we propose an algorithm for recursively constructing a hierarchy of topics from a collection of content-representative documents. We characterize each topic in the hierarchy by an integrated ranked list of mixed-length phrases. Our mining framework is based on a phrase-centric view for clustering, extracting, and ranking topical phrases. Experiments with datasets from three different domains illustrate our ability to generate hierarchies of high quality topics represented by meaningful phrases.
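A minimal sketch of recursive, phrase-centric topic construction over a toy corpus of token lists; the paper's actual clustering, phrase extraction, and ranking are far more sophisticated, so treat this only as an illustration of labeling each node with top terms and recursing on the induced split.

```python
from collections import Counter

def topic_tree(docs, depth=2, k=2):
    # Label this node with the k most frequent terms across the documents,
    # then recurse on the docs containing the single top term vs. the rest.
    if depth == 0 or len(docs) < 2:
        return None
    counts = Counter(t for d in docs for t in set(d))
    ranked = [t for t, _ in counts.most_common(k)]
    top = ranked[0]
    with_top = [d for d in docs if top in d]
    without = [d for d in docs if top not in d]
    return {"phrases": ranked,
            "children": [c for c in (topic_tree(with_top, depth - 1, k),
                                     topic_tree(without, depth - 1, k)) if c]}

docs = [["machine", "learning"], ["machine", "translation"],
        ["machine", "vision"], ["speech", "recognition"],
        ["speech", "synthesis"]]
tree = topic_tree(docs)
print(tree["phrases"])  # → ['machine', 'speech']
```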

Proceedings Article
01 Jun 2013
TL;DR: This work combines logical and distributional representations of natural language meaning by transforming distributional similarity judgments into weighted inference rules using Markov Logic Networks (MLNs), and shows that distributional phrase similarity improves its performance.
Abstract: We combine logical and distributional representations of natural language meaning by transforming distributional similarity judgments into weighted inference rules using Markov Logic Networks (MLNs). We show that this framework supports both judging sentence similarity and recognizing textual entailment by appropriately adapting the MLN implementation of logical connectives. We also show that distributional phrase similarity, used as textual inference rules created on the fly, improves its performance.
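The step of turning distributional similarity judgments into weighted rules can be illustrated with invented phrase vectors: cosine similarity becomes the weight of a soft inference rule, which an MLN would then combine with logical structure. The vectors below are stand-ins for corpus co-occurrence statistics, not the paper's data.

```python
import numpy as np

# Invented 2-dim distributional vectors for three phrases.
vecs = {"big dog":   np.array([0.9, 0.1]),
        "large dog": np.array([0.85, 0.15]),
        "small cat": np.array([0.1, 0.9])}

def rule_weight(p, q):
    # Cosine similarity between phrase vectors, used as the weight of a
    # soft inference rule p -> q in the spirit of the MLN construction.
    a, b = vecs[p], vecs[q]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A high-weight rule supports entailment-like inference; a low one does not.
print(rule_weight("big dog", "large dog") > rule_weight("big dog", "small cat"))  # → True
```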

Proceedings Article
01 Aug 2013
TL;DR: FudanNLP is an open source toolkit for Chinese natural language processing (NLP), which uses statistics-based and rule-based methods to deal with Chinese NLP tasks, such as word segmentation, part-of-speech tagging, named entity recognition, dependency parsing, time phrase recognition, anaphora resolution and so on.
Abstract: The need for Chinese natural language processing (NLP) is growing across a range of research and commercial applications. However, most current Chinese NLP tools and components still have a wide range of issues that need to be further improved and developed. FudanNLP is an open source toolkit for Chinese natural language processing (NLP), which uses statistics-based and rule-based methods to deal with Chinese NLP tasks, such as word segmentation, part-of-speech tagging, named entity recognition, dependency parsing, time phrase recognition, anaphora resolution and so on.

Patent
04 Nov 2013
TL;DR: In this paper, a natural language processing system is described, which includes a language decoder that generates information which is stored in a three-level framework (word, clause, phrase).
Abstract: A natural language processing system is disclosed herein. Embodiments of the NLP system perform hand-written rule-based operations that do not rely on a trained corpus. Rules can be added or modified at any time to improve accuracy of the system, and to allow the same system to operate on unstructured plain text from many disparate contexts (e.g. news articles as well as Twitter posts as well as medical articles) without harming accuracy for any one context. Embodiments also include a language decoder (LD) that generates information which is stored in a three-level framework (word, clause, phrase). The LD output is easily leveraged by various software applications to analyze large quantities of text from any source in a more sophisticated and flexible manner than previously possible. A query language (LDQL) for information extraction from NLP parsers' output is disclosed, with emphasis on its embodiment implemented for LD. We also show how to use LDQL for knowledge extraction, using the example of an application named Knowledge Browser.

Proceedings Article
01 Aug 2013
TL;DR: This work investigates whether integrating N-gram-based translation and reordering models into a phrase-based decoder helps overcome the problematic phrasal independence assumption, and shows that performance does significantly improve.
Abstract: The phrase-based and N-gram-based SMT frameworks complement each other. While the former is better able to memorize, the latter provides a more principled model that captures dependencies across phrasal boundaries. Some work has been done to combine insights from these two frameworks. A recent successful attempt showed the advantage of using phrase-based search on top of an N-gram-based model. We probe this question in the reverse direction by investigating whether integrating N-gram-based translation and reordering models into a phrase-based decoder helps overcome the problematic phrasal independence assumption. A large scale evaluation over 8 language pairs shows that performance does significantly improve.

Journal ArticleDOI
TL;DR: Individual differences in cognitive ability and their role in models and theories of language production are examined, and a significant relationship between repair disfluencies and inhibition is revealed.

Journal ArticleDOI
TL;DR: In this article, prosodic features extracted from the syllable, tri-syllable, and multi-word (phrase) levels are proposed in addition to spectral features for capturing language-specific information.
Abstract: In this paper, spectral and prosodic features extracted from different levels are explored for analyzing the language-specific information present in speech. In this work, spectral features extracted from frames of 20 ms (block processing), individual pitch cycles (pitch synchronous analysis) and glottal closure regions are used for discriminating the languages. Prosodic features extracted from syllable, tri-syllable and multi-word (phrase) levels are proposed in addition to spectral features for capturing the language-specific information. In this study, language-specific prosody is represented by intonation, rhythm and stress features at syllable and tri-syllable (word) levels, whereas temporal variations in fundamental frequency (F0 contour), durations of syllables and temporal variations in intensities (energy contour) are used to represent the prosody at the multi-word (phrase) level. For analyzing the language-specific information in the proposed features, the Indian language speech database (IITKGP-MLILSC) is used. Gaussian mixture models are used to capture the language-specific information from the proposed features. The evaluation results indicate that language identification performance is improved with the combination of features. Performance of the proposed features is also analyzed on the standard Oregon Graduate Institute Multi-Language Telephone-based Speech (OGI-MLTS) database.

Patent
27 Dec 2013
TL;DR: In this article, a method on a mobile device for a wireless network is described, where an audio input is monitored for a trigger phrase spoken by a user of the mobile device after the trigger phrase is buffered.
Abstract: A method on a mobile device for a wireless network is described. An audio input is monitored for a trigger phrase spoken by a user of the mobile device. A command phrase spoken by the user after the trigger phrase is buffered. The command phrase corresponds to a call command and a call parameter. A set of target contacts associated with the mobile device is selected based on respective voice validation scores and respective contact confidence scores. The respective voice validation scores are based on the call parameter. The respective contact confidence scores are based on a user context associated with the user. A call to a priority contact of the set of target contacts is automatically placed if the voice validation score of the priority contact meets a validation threshold and the contact confidence score of the priority contact meets a confidence threshold.
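The selection logic in the claim (dual thresholds, then an automatic call to the priority contact) might be sketched like this; the names, scores, and thresholds are invented for illustration.

```python
def place_call(contacts, validation_threshold=0.7, confidence_threshold=0.6):
    """Pick the priority contact: the highest-scoring contact whose
    voice-validation score and context-confidence score both clear their
    thresholds. A toy sketch of the selection logic in the claim."""
    eligible = [c for c in contacts
                if c["validation"] >= validation_threshold
                and c["confidence"] >= confidence_threshold]
    if not eligible:
        return None  # no automatic call is placed
    return max(eligible, key=lambda c: c["validation"] + c["confidence"])["name"]

contacts = [{"name": "Ann", "validation": 0.9,  "confidence": 0.8},
            {"name": "Bob", "validation": 0.95, "confidence": 0.4},   # low confidence
            {"name": "Cat", "validation": 0.5,  "confidence": 0.9}]   # low validation
print(place_call(contacts))  # → Ann
```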

Journal Article
TL;DR: This paper aims at analyzing a solution for the sentiment classification at a fine-grained level, namely the sentence level in which polarity of the sentence can be given by three categories as positive, negative and neutral.
Abstract: Sentiment classification is a way to analyze the subjective information in text and then mine the opinion. Sentiment analysis is the procedure by which information is extracted from the opinions, appraisals and emotions of people in regards to entities, events and their attributes. In decision making, the opinions of others have a significant effect on customers' ease in making choices with regard to online shopping, events, products, and entities. Approaches to text sentiment analysis typically work at a particular level, such as the phrase, sentence or document level. This paper aims at analyzing a solution for sentiment classification at a fine-grained level, namely the sentence level, in which the polarity of a sentence can be given by three categories: positive, negative and neutral.
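Sentence-level polarity classification in its simplest lexicon-based form can be sketched with an invented toy lexicon; real systems use far richer features, so this only illustrates the three-way positive/negative/neutral decision.

```python
# Invented toy lexicons; a real system would use a curated sentiment lexicon.
POSITIVE = {"good", "great", "love", "excellent"}
NEGATIVE = {"bad", "poor", "hate", "terrible"}

def sentence_polarity(sentence):
    """Classify one sentence as positive/negative/neutral by counting
    lexicon hits: a minimal sketch of sentence-level sentiment."""
    tokens = sentence.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentence_polarity("I love this great phone"))  # → positive
```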

Journal ArticleDOI
TL;DR: The development of an airway modulation model is described that simulates the time-varying changes of the glottis and vocal tract, as well as acoustic wave propagation, during speech production to create a type of artificial talker that can be used to study various aspects of how sound is generated by humans and how that sound is perceived by a listener.

Antti Suni, Daniel Aalto, Tuomo Raitio1, Paavo Alku, Martti Vainio 
01 Jan 2013
TL;DR: A system is presented that uses wavelets to decompose the pitch contour into five temporal scales ranging from microprosody to the utterance level; it is compared to a baseline where only one decision tree is trained to generate the pitch contour.
Abstract: The pitch contour in speech contains information about different linguistic units at several distinct temporal scales. At the finest level, the microprosodic cues are purely segmental in nature, whereas in the coarser time scales, lexical tones, word accents, and phrase accents appear with both linguistic and paralinguistic functions. Consequently, the pitch movements happen on different temporal scales: the segmental perturbations are faster than typical pitch accents and so forth. In the HMM-based speech synthesis paradigm, slower intonation patterns are not easy to model. The statistical procedure of decision tree clustering highlights instances that are more common, resulting in good reproduction of microprosody and declination, but with less variation on word and phrase level compared to human speech. Here we present a system that uses wavelets to decompose the pitch contour into five temporal scales ranging from microprosody to the utterance level. Each component is then individually trained within the HMM framework and used in a superpositional manner at the synthesis stage. The resulting system is compared to a baseline where only one decision tree is trained to generate the pitch contour. Index Terms: HMM-based synthesis, intonation modeling, wavelet decomposition
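A multi-scale decomposition of this kind can be approximated without a wavelet library by differencing progressively smoothed copies of the contour. This moving-average sketch only mimics the paper's wavelet analysis, but it shows the key property the superpositional scheme relies on: the five components sum back to the original contour.

```python
import numpy as np

def decompose(contour, n_scales=5):
    """Split a pitch contour into n_scales band components by differencing
    progressively smoothed copies (a moving-average stand-in for wavelets).
    The components sum back to the original contour."""
    comps, residual = [], contour.astype(float)
    for s in range(1, n_scales):
        w = 2 ** s + 1                           # growing smoothing window
        smooth = np.convolve(residual, np.ones(w) / w, mode="same")
        comps.append(residual - smooth)          # detail at this temporal scale
        residual = smooth
    comps.append(residual)                       # coarsest, utterance-level trend
    return comps

# Synthetic "F0 contour": a slow trend plus fixed pseudo-random jitter.
f0 = np.sin(np.linspace(0, 6, 200)) + 0.1 * np.random.RandomState(0).randn(200)
parts = decompose(f0)
print(len(parts), np.allclose(sum(parts), f0))  # → 5 True
```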

MonographDOI
29 Nov 2013
TL;DR: 'Vocatives' proposes a formal syntactic approach to vocatives that focuses on the internal structure of vocative phrases and on the mechanism through which a vocative phrase connects with the clause.
Abstract: 'Vocatives' proposes a formal syntactic approach to vocatives. The analysis focuses on the internal structure of vocative phrases and on the mechanism through which a vocative phrase connects with the clause.

Journal ArticleDOI
TL;DR: This paper develops and evaluates an automatic keyphrase extraction system for scientific documents and shows the efficiency and effectiveness of the refined candidate set and demonstrates that the new features improve the accuracy of the system.
Abstract: Automatic keyphrase extraction techniques play an important role for many tasks including indexing, categorizing, summarizing, and searching. In this paper, we develop and evaluate an automatic keyphrase extraction system for scientific documents. Compared with previous work, our system concentrates on two important issues: (1) more precise location for potential keyphrases: a new candidate phrase generation method is proposed based on the core word expansion algorithm, which can reduce the size of the candidate set by about 75% without increasing the computational complexity; (2) overlap elimination for the output list: when a phrase and its sub-phrases coexist as candidates, an inverse document frequency feature is introduced for selecting the proper granularity. Additional new features are added for phrase weighting. Experiments based on real-world datasets were carried out to evaluate the proposed system. The results show the efficiency and effectiveness of the refined candidate set and demonstrate that the new features improve the accuracy of the system. The overall performance of our system compares favorably with other state-of-the-art keyphrase extraction systems.
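The overlap-elimination idea (choosing between a phrase and its sub-phrase by inverse document frequency when both survive as candidates) can be sketched as follows; the scoring here is a simplification for illustration, not the paper's exact formula.

```python
import math

def idf(phrase, docs):
    # Inverse document frequency with +1 smoothing in the denominator.
    containing = sum(phrase in d for d in docs)
    return math.log(len(docs) / (1 + containing))

def pick_granularity(phrase, subphrase, docs):
    """When a candidate phrase and its sub-phrase coexist, keep the one with
    higher IDF, i.e. the more document-specific granularity."""
    return phrase if idf(phrase, docs) >= idf(subphrase, docs) else subphrase

docs = ["neural machine translation model",
        "machine translation of speech",
        "a translation of the novel"]
print(pick_granularity("machine translation", "translation", docs))  # → machine translation
```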

Proceedings Article
01 Jun 2013
TL;DR: It is shown that simple features coupling phrase orientation to frequent words or word clusters can improve translation quality, and that sparse decoder features outperform maximum entropy handily, indicating that there are major advantages to optimizing reordering features directly for BLEU with the decoder in the loop.
Abstract: There have been many recent investigations into methods to tune SMT systems using large numbers of sparse features. However, there have not been nearly so many examples of helpful sparse features, especially for phrase-based systems. We use sparse features to address reordering, which is often considered a weak point of phrase-based translation. Using a hierarchical reordering model as our baseline, we show that simple features coupling phrase orientation to frequent words or word clusters can improve translation quality, with boosts of up to 1.2 BLEU points in Chinese-English and 1.8 in Arabic-English. We compare this solution to a more traditional maximum entropy approach, where a probability model with similar features is trained on word-aligned bitext. We show that sparse decoder features outperform maximum entropy handily, indicating that there are major advantages to optimizing reordering features directly for BLEU with the decoder in the loop.

Proceedings Article
01 Oct 2013
TL;DR: It is shown that normalizing English tweets and then translating improves translation quality (compared to translating unnormalized text) using three standard web translation services as well as a phrase-based translation system trained on parallel microblog data.
Abstract: Compared to the edited genres that have played a central role in NLP research, microblog texts use a more informal register with nonstandard lexical items, abbreviations, and free orthographic variation. When confronted with such input, conventional text analysis tools often perform poorly. Normalization — replacing orthographically or lexically idiosyncratic forms with more standard variants — can improve performance. We propose a method for learning normalization rules from machine translations of a parallel corpus of microblog messages. To validate the utility of our approach, we evaluate extrinsically, showing that normalizing English tweets and then translating improves translation quality (compared to translating unnormalized text) using three standard web translation services as well as a phrase-based translation system trained on parallel microblog data.
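A hand-written rule table can stand in for the normalization rules the paper learns automatically from machine-translated parallel microblog data; the token pairs below are illustrative, not learned.

```python
# Illustrative normalization rules: idiosyncratic token -> standard variant.
NORMALIZATION_RULES = {"u": "you", "2": "to", "gr8": "great", "im": "i'm"}

def normalize(tweet):
    """Replace idiosyncratic tokens with standard variants before handing
    the text to a downstream MT system; unknown tokens pass through."""
    return " ".join(NORMALIZATION_RULES.get(tok, tok) for tok in tweet.split())

print(normalize("im happy 2 see u"))  # → i'm happy to see you
```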

Proceedings Article
01 Oct 2013
TL;DR: Experiments on large-scale training data show that the two proposed lexical chain based cohesion models can substantially improve translation quality in terms of BLEU and that the probability cohesion model outperforms previous models based on lexical cohesion devices.
Abstract: Lexical chains provide a representation of the lexical cohesion structure of a text. In this paper, we propose two lexical chain based cohesion models to incorporate lexical cohesion into document-level statistical machine translation: 1) a count cohesion model that rewards a hypothesis whenever a chain word occurs in the hypothesis, 2) and a probability cohesion model that further takes chain word translation probabilities into account. We compute lexical chains for each source document to be translated and generate target lexical chains based on the computed source chains via maximum entropy classifiers. We then use the generated target chains to provide constraints for word selection in document-level machine translation through the two proposed lexical chain based cohesion models. We verify the effectiveness of the two models using a hierarchical phrase-based translation system. Experiments on large-scale training data show that they can substantially improve translation quality in terms of BLEU and that the probability cohesion model outperforms previous models based on lexical cohesion devices.
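The count cohesion model reduces to a very small sketch: reward a hypothesis once for every chain word it contains. The probability variant would weight each hit by a chain-word translation probability; the chain and hypothesis below are invented.

```python
def count_cohesion_score(hypothesis, target_chain):
    """Count cohesion model sketch: one reward per chain word that occurs
    in the translation hypothesis."""
    tokens = set(hypothesis.lower().split())
    return sum(word in tokens for word in target_chain)

chain = ["bank", "loan", "interest"]  # a toy target lexical chain
print(count_cohesion_score("the bank raised its interest rate", chain))  # → 2
```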

Journal ArticleDOI
TL;DR: A web-based ICALL system for German that provides error-specific feedback suited to learner expertise and a study that supports the need for a CALL system that addresses multiple errors by considering language teaching pedagogy is described.
Abstract: This paper describes a web-based ICALL system for German that provides error-specific feedback suited to learner expertise. The main focus of the discussion is on the Domain Knowledge and the Filtering Module. The Domain Knowledge represents the knowledge of linguistic rules and vocabulary, and its goal is to parse sentences and phrases to produce sets of phrase descriptors. Phrase descriptors provide very detailed information on the types of errors and their location in the sentence. The Filtering Module is responsible for processing multiple learner errors. Motivated by pedagogical and linguistic design decisions, the Filtering Module ranks student errors by way of an Error Priority Queue. The Error Priority Queue is flexible: the grammar constraints can be reordered to reflect the desired emphasis of a particular exercise. In addition, a language instructor might choose not to report some errors. The paper concludes with a study that supports the need for a CALL system that addresses multiple errors by considering language teaching pedagogy.
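The Filtering Module's Error Priority Queue might look roughly like this; the constraint names, priorities, and the suppression behaviour are invented to illustrate the reordering and error-omission the paper describes.

```python
import heapq

class ErrorPriorityQueue:
    """Sketch of the Filtering Module's queue: errors are ranked by a
    configurable priority table, so instructors can re-weight constraints
    per exercise or silence some error types entirely."""
    def __init__(self, priorities):
        self.priorities = priorities  # constraint name -> rank (lower = reported first)
        self.heap = []
    def push(self, constraint, description):
        rank = self.priorities.get(constraint)
        if rank is None:              # unlisted error types are not reported
            return
        heapq.heappush(self.heap, (rank, constraint, description))
    def pop(self):
        return heapq.heappop(self.heap)[1:]

q = ErrorPriorityQueue({"agreement": 0, "word_order": 1})  # "spelling" suppressed
q.push("word_order", "verb not in second position")
q.push("spelling", "Hause misspelled")
q.push("agreement", "subject-verb agreement")
print(q.pop())  # → ('agreement', 'subject-verb agreement')
```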

Book ChapterDOI
TL;DR: A theoretical account of the relational interpretation of combined concepts is presented, together with empirical evidence supporting the account's specific predictions about how relational interpretations are selected and evaluated, and about how the relational interpretation is elaborated to create a fully specified new concept.
Abstract: Compositionality and productivity, the abilities to combine existing concepts and words to create new concepts, phrases, words, and sentences, are hallmarks of the human conceptual and language systems. Combined concepts are formed within the conceptual system and can be expressed via modifier-noun phrases (e.g. purple beans) and compound words (e.g. snowball), which are the simplest forms of productivity. Modifier-noun phrases and compound words are often paraphrased using a relation to connect the constituents (e.g. beans that are purple, ball made of snow). The phrase or compound does not explicitly contain the underlying relation, but the existence of the relation can be shown by manipulating the availability of the relation and observing the effect on the interpretation of the phrase or compound. This chapter describes how novel modifier-noun phrases and established compounds are interpreted. We present a theoretical account of relational interpretation of combined concepts and present the empirical evidence for the use of relational structures. We then present the empirical evidence supporting our theoretical account’s specific predictions about how relational interpretations are selected and evaluated and how the relational interpretation is elaborated to create a fully specified new concept.