Analysis of Sanskrit Text: Parsing and Semantic Relations
Citations
Sanskrit Word Segmentation Using Character-level Recurrent and Convolutional Neural Networks
Formal Structure of Sanskrit Text: Requirements Analysis for a Mechanical Sanskrit Processor
Design and analysis of a lean interface for Sanskrit corpus annotation
A Deterministic Dependency Parser with Dynamic Programming for Sanskrit
Extracting Dependency Trees from Sanskrit Texts
References
Introduction to Automata Theory, Languages, and Computation
Planning as heuristic search
Recognition of visual activities and interactions by stochastic parsing
Parsing Free Word Order Languages in the Paninian Framework
A functional toolkit for morphological and phonological processing, application to a Sanskrit tagger
Frequently Asked Questions (7)
Q2. What are the future works mentioned in the paper "Analysis of sanskrit text : parsing and semantic relations" ?
Future work in this direction includes parsing of compound sentences and incorporating stochastic parsing. The authors are also trying to build a sufficiently rich lexicon so that they can work in the direction of ? ? in Sanskrit sentences.
Q3. What is the purpose of the proposed Sanskrit parser?
Although computational processing of the Sanskrit language has been reported in the literature (Huet, 2005), along with some computational toolkits (Huet, 2002), and work is ongoing toward a mathematical model and dependency grammar of Sanskrit (Huet, 2006), the proposed Sanskrit parser is being developed to use Sanskrit as an Indian networking language (INL).
Q4. What are the 9 classes of pronouns in Sanskrit?
The authors have classified these pronouns into 9 classes, including: Personal, Demonstrative, Relative, Indefinite, Correlative, Reciprocal, and Possessive.
Q5. Why is the morphological analyzer used for the analysis of Sanskrit words?
While evaluating the Sanskrit words in a sentence, the authors follow these computation steps: 1. First, a left-to-right parse separates out the words in the sentence.
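The left-to-right, longest-match word separation described above can be sketched as follows. This is a minimal illustration under stated assumptions: the lexicon entries and the romanized input are invented placeholders, not data from the paper.

```python
def segment(sentence, lexicon):
    """Greedily split `sentence` into the longest lexicon entries, left to right."""
    words = []
    i = 0
    while i < len(sentence):
        # Try the longest possible match first, shrinking toward a single character.
        for j in range(len(sentence), i, -1):
            if sentence[i:j] in lexicon:
                words.append(sentence[i:j])
                i = j
                break
        else:
            return None  # no lexicon entry matches at position i: segmentation fails
    return words

# Hypothetical romanized lexicon for illustration only.
lexicon = {"rama", "vanam", "gacchati"}
print(segment("ramavanamgacchati", lexicon))  # -> ['rama', 'vanam', 'gacchati']
```

A purely greedy pass like this commits to the first longest match it finds, which is exactly why the blocking-and-backtracking mechanism discussed in Q6 is needed.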
Q6. What is the way to get rid of the blocking?
If the algorithm can generate a parse by taking the longest possible match, the stacked possibilities are never explored; but if the subject disagrees with the verb (blocking), or some other mismatch is found, the algorithm falls back on the stacked possibilities.
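The fallback to stacked possibilities can be sketched as a backtracking search: prefer the longest match, but retry shorter matches when the resulting parse is blocked. The `agrees` predicate here is a stand-in for the paper's agreement checks, and the lexicon is an invented toy example.

```python
def parse(sentence, lexicon, agrees):
    """Segment `sentence`, backtracking to shorter matches when `agrees` rejects a parse."""
    def helper(i, words):
        if i == len(sentence):
            # A complete segmentation is accepted only if it passes the agreement check.
            return words if agrees(words) else None
        # Longest match first; shorter (stacked) alternatives are tried on failure.
        for j in range(len(sentence), i, -1):
            if sentence[i:j] in lexicon:
                result = helper(j, words + [sentence[i:j]])
                if result is not None:
                    return result
        return None  # all stacked possibilities at this position are exhausted
    return helper(0, [])

# Toy lexicon with an ambiguous split, for illustration only.
lex = {"ab", "abc", "cd", "d"}
print(parse("abcd", lex, lambda ws: True))               # -> ['abc', 'd']
print(parse("abcd", lex, lambda ws: ws != ["abc", "d"]))  # -> ['ab', 'cd']
```

When the longest-match parse `['abc', 'd']` is "blocked" by the predicate, the search backtracks and recovers the shorter-match alternative `['ab', 'cd']`.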
Q7. What is the way to analyze Sanskrit?
The Paninian framework has been successfully applied to dependency grammars for Indian languages (Sangal, 1993), where constraint-based parsing is used and the mapping between karaka and vibhakti is via a TAM (tense, aspect, modality) table.
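The table-driven mapping mentioned above can be sketched as a lookup keyed on the verb's TAM label and a noun's vibhakti (case ending). The table contents below are simplified placeholders for illustration, not the actual Paninian karaka charts.

```python
# (tam_label, vibhakti) -> karaka relation; entries are illustrative assumptions.
KARAKA_TABLE = {
    ("present_active", "nominative"): "karta",      # agent
    ("present_active", "accusative"): "karma",      # object
    ("present_active", "instrumental"): "karana",   # instrument
    ("present_passive", "instrumental"): "karta",   # agent demoted in passive
}

def karaka_for(tam_label, vibhakti):
    """Look up the karaka relation for a case ending under a given TAM label."""
    return KARAKA_TABLE.get((tam_label, vibhakti), "unknown")

print(karaka_for("present_active", "accusative"))  # -> karma
```

Keying the table on the TAM label captures the point that the same vibhakti can map to different karakas depending on the verb form, as in the active/passive contrast above.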