Author
Vrashabh Prasad Jain
Bio: Vrashabh Prasad Jain is an academic researcher from Mahatma Gandhi Antarrashtriya Hindi Vishwavidyalaya. The author has contributed to research in the topics of Autoencoder and Deep learning, has an h-index of 1, and has co-authored 1 publication receiving 4 citations.
Topics: Autoencoder, Deep learning, Word2vec, Cluster analysis
Papers
21 Dec 2018
TL;DR: A deep learning based approach that assigns POS tags to the words of an input text; the untagged Sanskrit corpus prepared by JNU is used for training, and the tagged corpus for tag assignment and for determining model accuracy.
Abstract: In this paper, we present a deep learning based approach to assign POS tags to words in a piece of text given to it as input. We propose an unsupervised approach owing to the lack of a large annotated Sanskrit corpus and use the untagged Sanskrit corpus prepared by JNU for our purpose. The only tagged corpus for Sanskrit, also created by JNU, has 115,000 words, which is not sufficient to apply supervised deep learning approaches; we utilize this tagged corpus for tag assignment and for determining model accuracy. We explore various methods through which each Sanskrit word can be represented as a point in a multi-dimensional vector space whose position accurately captures its meaning and the semantic information associated with it. We also explore other data sources to improve the performance and robustness of the vector representations. We use these rich vector representations and explore autoencoder based approaches for dimensionality reduction, compressing them into encodings suitable for clustering in the vector space. We experiment with different dimensions of these compressed representations and present the one found to offer the best clustering performance. To model the sequence and preserve semantic information, we feed these embeddings to a bidirectional LSTM autoencoder. We assign a POS tag to each of the obtained clusters and produce our result by testing the model on the tagged corpus.
4 citations
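The abstract above describes a pipeline of Word2Vec word vectors, an autoencoder that compresses them, and clustering of the compressed codes, with each cluster later mapped to a POS tag. The snippet below is a minimal sketch of that kind of pipeline, not the authors' code: the toy corpus, the 100-dimensional embeddings, the 16-dimensional code, and the five clusters are all assumptions, and the bidirectional LSTM autoencoder stage is omitted for brevity.

```python
# Minimal sketch of the kind of pipeline the abstract describes: Word2Vec
# vectors -> autoencoder compression -> clustering of the compressed codes.
# NOT the authors' code; corpus, dimensions and cluster count are assumptions.
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
from tensorflow import keras

# Toy untagged corpus standing in for the JNU Sanskrit corpus.
sentences = [["ramah", "vanam", "gacchati"], ["sita", "phalam", "khadati"]]

w2v = Word2Vec(sentences, vector_size=100, window=2, min_count=1, epochs=50)
vocab = list(w2v.wv.index_to_key)
X = np.array([w2v.wv[w] for w in vocab])  # (num_words, 100)

# Dense autoencoder compressing 100-d embeddings into a smaller code.
code_dim = 16  # assumed size; the paper experiments with several
inp = keras.Input(shape=(100,))
code = keras.layers.Dense(code_dim, activation="relu")(inp)
out = keras.layers.Dense(100, activation="linear")(code)
autoencoder = keras.Model(inp, out)
encoder = keras.Model(inp, code)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=20, batch_size=8, verbose=0)

# Cluster the compressed codes; each cluster would later be assigned a POS tag
# by inspecting the small tagged evaluation corpus.
codes = encoder.predict(X, verbose=0)
clusters = KMeans(n_clusters=5, n_init=10).fit_predict(codes)
for word, c in zip(vocab, clusters):
    print(word, "-> cluster", c)
```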
Cited by
01 Jan 2020
TL;DR: Here, 328 Sanskrit words are tested through four morphological analyzers: Samsaadhanii, the morphological analyzers by JNU and by TDIL (both available online), and the locally developed and installed Sanguj morphological analyzer.
Abstract: In linguistics, morphology is the study of words, word formation, and their analysis and generation. A morphological analyzer is a tool for understanding grammatical characteristics and a constituent's part-of-speech information. A morphological analyzer is useful in many NLP applications such as syntactic parsing, spell checking, information retrieval, and machine translation. Here, 328 Sanskrit words are tested through four morphological analyzers: Samsaadhanii, the morphological analyzers by JNU and by TDIL (both available online), and the locally developed and installed Sanguj morphological analyzer. There is negligible divergence in the results they return.
1 citation
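As a rough illustration of the comparison methodology (running each test word through several analyzers and flagging divergences), the following hypothetical harness uses placeholder analyzer functions; it is not an interface to the actual Samsaadhanii, JNU, TDIL, or Sanguj tools.

```python
# Purely hypothetical harness: run each test word through several analyzers and
# flag the words where their outputs diverge. The analyzer functions below are
# placeholder stand-ins, NOT the real tools' interfaces.
from typing import Callable, Dict, List, Tuple

def analyzer_a(word: str) -> str:  # hypothetical placeholder
    return word + "+masc+nom+sg"

def analyzer_b(word: str) -> str:  # hypothetical placeholder
    return word + "+masc+nom+sg"

def divergences(words: List[str],
                analyzers: Dict[str, Callable[[str], str]]) -> List[Tuple[str, Dict[str, str]]]:
    """Return the words on which the analyzers do not all agree."""
    out = []
    for w in words:
        results = {name: fn(w) for name, fn in analyzers.items()}
        if len(set(results.values())) > 1:
            out.append((w, results))
    return out

test_words = ["ramah", "vanam"]  # the study itself uses 328 Sanskrit words
print(divergences(test_words, {"A": analyzer_a, "B": analyzer_b}))
```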
09 Oct 2020
TL;DR: A model is advocated that employs a deep learning method: an LSTM (Long Short-Term Memory) neural network trained over a massive data set performs the necessary categorisation, using context-based representations of the data obtained through Word2Vec along with the TensorFlow and Keras packages.
Abstract: Language is the most fundamental and historically natural means of communication. Grammar plays a critical role in the quality of a language. Individuals accumulate knowledge over a lifetime, mastering rules and constraints of meaning that allow us to comprehend and interact with one another. Translating such awareness into a computer, so that it can interpret and classify contextual evidence into a proper syntactic form and thereby validate that the information is well-formed, is a sophisticated activity and very much needed at present. The paper addresses this issue and presents the development of such a grammar-verification mechanism for the Dravidian language Kannada. At first sight, the intricacy of the language poses a problem: a rule-based stance is the easier route and makes it possible to identify detected flaws competently, but it takes a linguistic specialist to compile hundreds of parallel rules, which are difficult to maintain. Here, a model is advocated that employs a deep learning method: an LSTM (Long Short-Term Memory) neural network is trained over a massive data set to perform the necessary categorisation, using context-based representations of the data obtained through Word2Vec along with the TensorFlow and Keras packages. The proposed system is able to perform Grammatical Error Detection (GED) effectively.
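To make the described setup concrete, here is a minimal sketch, assuming Word2Vec features feeding a Keras LSTM that labels a sentence as grammatical or ungrammatical; the toy transliterated sentences, vector size, and layer sizes are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of the described setup: Word2Vec features feeding a Keras LSTM
# that labels a sentence as grammatical or not. NOT the authors' code.
import numpy as np
from gensim.models import Word2Vec
from tensorflow import keras

# Toy transliterated sentences standing in for a Kannada corpus,
# with binary labels (1 = grammatical, 0 = contains an error).
sentences = [["avanu", "shaalege", "hoguttane"], ["avanu", "hoguttane", "shaalege"]]
labels = np.array([1, 0])

w2v = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=50)
max_len = 5

def to_matrix(tokens):
    # Pad/truncate each sentence to max_len rows of Word2Vec vectors.
    vecs = [w2v.wv[t] for t in tokens][:max_len]
    vecs += [np.zeros(50)] * (max_len - len(vecs))
    return np.stack(vecs)

X = np.stack([to_matrix(s) for s in sentences])  # (num_sentences, max_len, 50)

model = keras.Sequential([
    keras.Input(shape=(max_len, 50)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=10, verbose=0)
print(model.predict(X, verbose=0))  # probability each sentence is grammatical
```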
TL;DR: In this paper, a deep neural network model is proposed to improve the accuracy of parts-of-speech tagging in low-resource languages, with results reported for English and the low-resource Indian language Assamese.
Abstract: Over the years, many different algorithms have been proposed to improve the accuracy of automatic parts-of-speech tagging. High accuracy of parts-of-speech tagging is very important for any NLP application. Powerful models such as the Hidden Markov Model (HMM) used for this purpose require a huge amount of training data and are also less accurate at detecting unknown (untrained) words. Most of the languages in the world lack enough resources in computable form to train such models, and NLP applications for such languages also encounter many unknown words during execution, which results in a low accuracy rate. Improving accuracy for such low-resource languages is an open problem. In this paper, one stochastic method and a deep learning model are proposed to improve accuracy for such languages. The proposed language-independent methods improve unknown-word accuracy and overall accuracy with a small amount of training data. First, character bigrams and trigrams that are already part of the training samples are used to calculate the maximum likelihood for tagging unknown words using the Viterbi algorithm and HMM. With training datasets below 10K in size, an improvement of 12% to 14% in accuracy has been achieved. Next, a deep neural network model is also proposed to work with very little training data. It is based on word-level, character-level, character-bigram-level, and character-trigram-level representations to perform parts-of-speech tagging with a limited amount of available training data. The model improves the overall accuracy of the tagger along with the accuracy for unknown words. Results for English and the low-resource Indian language Assamese are discussed in detail. Performance is better than many state-of-the-art techniques for low-resource languages. The method is generic and can be used with any language with very little training data.
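The character n-gram idea for unknown words can be illustrated with a small sketch: emission scores for a word unseen in training are estimated from how often its character bigrams and trigrams occurred with each tag. This is one possible reading of the approach, not the authors' implementation; the toy counts and smoothing are illustrative, and a full tagger would plug these scores into Viterbi decoding over the HMM.

```python
# One possible reading of the character n-gram idea for unknown words, in plain
# Python (NOT the authors' implementation): score an unseen word against each
# tag by how often its character bigrams/trigrams occurred with that tag.
from collections import defaultdict

def char_ngrams(word, n):
    return [word[i:i + n] for i in range(len(word) - n + 1)]

# Toy tagged training data: (word, tag) pairs.
train = [("running", "VERB"), ("jumped", "VERB"), ("table", "NOUN"), ("tables", "NOUN")]

ngram_tag_counts = defaultdict(lambda: defaultdict(int))
tag_counts = defaultdict(int)
for word, tag in train:
    tag_counts[tag] += 1
    for n in (2, 3):
        for g in char_ngrams(word, n):
            ngram_tag_counts[g][tag] += 1
num_ngrams = len(ngram_tag_counts)

def unknown_emission(word, tag, alpha=0.1):
    # Average smoothed likelihood of the word's bigrams/trigrams under the tag.
    grams = char_ngrams(word, 2) + char_ngrams(word, 3)
    score = 0.0
    for g in grams:
        count = ngram_tag_counts.get(g, {}).get(tag, 0)
        score += (count + alpha) / (tag_counts[tag] + alpha * num_ngrams)
    return score / len(grams) if grams else alpha

# "walking" was never seen, but its "in", "ng" and "ing" n-grams were seen with
# VERB forms in training, so its VERB score ends up higher than its NOUN score.
for tag in tag_counts:
    print(tag, round(unknown_emission("walking", tag), 4))
```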