
Book Chapter

Processing Large Text Corpus Using N-Gram Language Modeling and Smoothing

01 Jan 2021, pp. 21–32

Abstract: Predicting the next word, letter, or phrase for a user while she is typing is a valuable tool for improving user experience. Users communicate, write reviews, and express their opinions on social media platforms frequently, often while on the move, so it has become necessary to provide an application that reduces typing effort and spelling errors when time is limited. Because the extensive use of all kinds of social media platforms keeps increasing the volume of text data, implementing a text prediction application is difficult given the size of the corpus that must be processed for language modeling. This paper's primary objective is to process a large text corpus and implement a probabilistic model such as N-grams to predict the next word from the user's input. In this exploratory research, n-gram models are discussed and evaluated using Good-Turing estimation, the perplexity measure, and the type-to-token ratio.
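The chapter is summarized only in prose here, but the core idea can be illustrated with a minimal sketch: count n-grams from a corpus, pick the most frequent continuation, and report a simple diversity statistic such as the type-to-token ratio. The toy corpus and function names below are illustrative assumptions, not the authors' implementation.

# Minimal bigram next-word predictor (illustrative sketch, not the paper's code).
from collections import Counter, defaultdict

corpus = "the user is typing the user is writing a review".split()  # toy corpus

# Count bigrams: counts[w1][w2] = number of times w2 follows w1.
counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def predict_next(word):
    """Return the most frequent continuation of `word`, or None if unseen."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Type-to-token ratio: distinct words / total words, a simple diversity measure.
ttr = len(set(corpus)) / len(corpus)

print(predict_next("user"), round(ttr, 2))

In practice the raw counts would be smoothed (for example with Good-Turing estimation) and competing models would be compared by perplexity on held-out text.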



Citations
Journal Article
Abstract: A Quranic optical character recognition (OCR) system based on a convolutional neural network (CNN) followed by a recurrent neural network (RNN) is introduced in this work. Six deep learning models are built to study how different representations of the input and output affect the accuracy and performance of the models, and to compare long short-term memory (LSTM) and gated recurrent unit (GRU) networks. A new Quranic OCR dataset is developed from the most famous printed version of the Holy Quran (Mushaf Al-Madinah), with page and line-level text images and their corresponding labels. This work’s contribution is a Quranic OCR model capable of recognizing diacritized Quranic text from images. Improved word recognition rate (WRR) and character recognition rate (CRR) are achieved in the experiments, and LSTM and GRU are compared in the Arabic text recognition domain. In addition, a public database is built for Arabic text recognition research that contains the diacritics and the Uthmanic script and is large enough to be used with deep learning models. The proposed system obtains an accuracy of 98% on the validation data, a WRR of 95%, and a CRR of 99% on the test dataset.
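The cited system is described as a CNN followed by an RNN over line images; a generic CRNN-style sketch in PyTorch conveys the shape of such a model. The layer sizes, the 32-pixel line height, and the alphabet size are assumptions for illustration, not the authors' exact architecture.

# CRNN-style sketch: CNN features per image column, then an LSTM over the width axis.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, num_chars=100, hidden=256):  # num_chars is an assumed alphabet size
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.LSTM(128 * 8, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_chars + 1)  # +1 for a CTC blank label

    def forward(self, x):                        # x: (batch, 1, 32, width) grayscale line images
        f = self.cnn(x)                          # (batch, 128, 8, width/4)
        f = f.permute(0, 3, 1, 2).flatten(2)     # (batch, width/4, 128*8) column features
        out, _ = self.rnn(f)
        return self.fc(out)                      # per-column character logits for CTC decoding

logits = CRNN()(torch.zeros(1, 1, 32, 128))      # -> shape (1, 32, 101)

Swapping nn.LSTM for nn.GRU gives the GRU variant compared in the paper.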

1 citation

Book Chapter
01 Jan 2021
Abstract: The main challenge in developing new routing protocols for opportunistic networks (Oppnets) is the unpredictable behavior of the network topology. The performance of existing routing protocols is limited by factors such as intermittent connectivity and limited bandwidth. The gateway access protocol (GAP), a new factor-aware routing protocol, is introduced. The proposed protocol combines probability-based routing with a genetic algorithm to direct messages toward their destination. The genetic algorithm is used to predict the route a message would take when forwarded to an adjacent node, and the quality of the predicted route is evaluated by a fitness function. A message is forwarded to the neighboring node only if the fitness value of the predicted route exceeds a threshold, where the threshold is derived from the principles of probabilistic routing. Simulation results show that GAP outperforms PRoPHET, geographic and energy-aware routing (GAER), and Spray and Wait in terms of message delivery ratio, average latency, and overhead ratio.
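The forwarding rule described above, namely forward a message only when the fitness of the predicted route exceeds a probability-derived threshold, can be sketched as follows. The fitness terms and the use of a neighbour's delivery predictability as the threshold are illustrative assumptions, not the published formulation.

# Sketch of a GAP-style forwarding decision (illustrative; not the published formulation).
def fitness(route):
    """Toy fitness: prefer routes with fewer hops and higher residual energy."""
    hops = len(route["nodes"])
    energy = min(route["energy"])        # weakest node's residual energy in [0, 1]
    return energy / hops

def should_forward(route, delivery_predictability):
    """Forward to the neighbour only if the route's fitness beats the threshold.

    Here the threshold is simply the neighbour's delivery predictability,
    in the spirit of PRoPHET-style probabilistic routing (an assumption).
    """
    return fitness(route) > delivery_predictability

candidate = {"nodes": ["A", "B", "D"], "energy": [0.9, 0.8, 0.7]}
print(should_forward(candidate, delivery_predictability=0.2))  # True -> forward the message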

References
Book
01 Jan 1950
TL;DR: If the meaning of the words “machine” and “think” are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll.
Abstract: I propose to consider the question, “Can machines think?” This should begin with definitions of the meaning of the terms “machine” and “think”. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words “machine” and “think” are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll.

6,121 citations

Journal Article
01 Oct 1950, Mind

5,949 citations

Book
01 Jan 2000
TL;DR: This book takes an empirical approach to language processing, based on applying statistical and other machine-learning algorithms to large corpora, to demonstrate how the same algorithm can be used for speech recognition and word-sense disambiguation.
Abstract: From the Publisher: This book takes an empirical approach to language processing, based on applying statistical and other machine-learning algorithms to large corpora. Methodology boxes are included in each chapter. Each chapter is built around one or more worked examples to demonstrate the main idea of the chapter. Covers the fundamental algorithms of various fields, whether originally proposed for spoken or written language, demonstrating how the same algorithm can be used for both speech recognition and word-sense disambiguation. Emphasis on web and other practical applications. Emphasis on scientific evaluation. Useful as a reference for professionals in any of the areas of speech and language processing.

3,602 citations

Journal Article
TL;DR: This work surveys the most widely-used algorithms for smoothing models for language n-gram modeling, and presents an extensive empirical comparison of several of these smoothing techniques, including those described by Jelinek and Mercer (1980), and introduces methodologies for analyzing smoothing algorithm efficacy in detail.
Abstract: We survey the most widely-used algorithms for smoothing models for language n-gram modeling. We then present an extensive empirical comparison of several of these smoothing techniques, including those described by Jelinek and Mercer (1980); Katz (1987); Bell, Cleary and Witten (1990); Ney, Essen and Kneser (1994); and Kneser and Ney (1995). We investigate how factors such as training data size, training corpus (e.g. Brown vs. Wall Street Journal), count cutoffs, and n-gram order (bigram vs. trigram) affect the relative performance of these methods, which is measured through the cross-entropy of test data. We find that these factors can significantly affect the relative performance of models, with the most significant factor being training data size. Since no previous comparisons have examined these factors systematically, this is the first thorough characterization of the relative performance of various algorithms. In addition, we introduce methodologies for analyzing smoothing algorithm efficacy in detail, and using these techniques we motivate a novel variation of Kneser–Ney smoothing that consistently outperforms all other algorithms evaluated. Finally, results showing that improved language model smoothing leads to improved speech recognition performance are presented.
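As a concrete instance of one of the surveyed techniques, Jelinek-Mercer smoothing linearly interpolates the maximum-likelihood bigram estimate with the unigram estimate, and competing models are then scored by the cross-entropy of held-out text. The toy corpus and the fixed interpolation weight below are assumptions; in practice the weight is tuned on held-out data.

# Jelinek-Mercer interpolated bigram model and cross-entropy of test text (toy sketch).
import math
from collections import Counter

train = "the cat sat on the mat".split()
test = "the cat sat".split()

uni = Counter(train)
bi = Counter(zip(train, train[1:]))
N = len(train)
lam = 0.7  # assumed interpolation weight

def p_interp(w2, w1):
    """P(w2|w1) = lam * ML bigram + (1 - lam) * ML unigram."""
    p_bi = bi[(w1, w2)] / uni[w1] if uni[w1] else 0.0
    p_uni = uni[w2] / N
    return lam * p_bi + (1 - lam) * p_uni

# Cross-entropy (bits per word) over the test bigrams; lower is better.
pairs = list(zip(test, test[1:]))
H = -sum(math.log2(p_interp(w2, w1)) for w1, w2 in pairs) / len(pairs)
print(round(H, 3), "bits/word; perplexity =", round(2 ** H, 3))

A full implementation would also handle out-of-vocabulary test words so that no probability is ever exactly zero.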

1,908 citations

Proceedings Article
Reinhard Kneser, Hermann Ney
09 May 1995
TL;DR: This paper proposes to use distributions which are especially optimized for the task of back-off, which are quite different from the probability distributions that are usually used for backing-off.
Abstract: In stochastic language modeling, backing-off is a widely used method to cope with the sparse data problem. In case of unseen events this method backs off to a less specific distribution. In this paper we propose to use distributions which are especially optimized for the task of backing-off. Two different theoretical derivations lead to distributions which are quite different from the probability distributions that are usually used for backing-off. Experiments show an improvement of about 10% in terms of perplexity and 5% in terms of word error rate.
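The central idea, that the back-off distribution should be optimized for its role rather than taken as raw relative frequencies, is usually realized as a continuation probability: a word's lower-order score is proportional to the number of distinct contexts it follows. The toy corpus and discount value below are assumptions for illustration.

# Kneser-Ney style sketch: the lower-order (back-off) distribution is built from
# "continuation counts" (distinct left contexts), not from raw word frequency.
from collections import Counter, defaultdict

train = "san francisco san francisco san francisco new york new day new idea".split()
bigrams = list(zip(train, train[1:]))

uni = Counter(train)
bi = Counter(bigrams)
left = defaultdict(set)
for w1, w2 in bigrams:
    left[w2].add(w1)
bigram_types = len(bi)

def p_cont(w):
    """Continuation probability: distinct predecessors of w over all bigram types."""
    return len(left[w]) / bigram_types

D = 0.75  # assumed absolute-discount value

def p_kn(w2, w1):
    """Interpolated Kneser-Ney bigram estimate P(w2 | w1)."""
    discounted = max(bi[(w1, w2)] - D, 0.0) / uni[w1]
    lam = D * len({b for (a, b) in bi if a == w1}) / uni[w1]  # left-over probability mass
    return discounted + lam * p_cont(w2)

# "francisco" and "new" are equally frequent (3 each), but "francisco" only ever
# follows "san", so its continuation probability is much lower than "new"'s.
print(p_cont("francisco"), p_cont("new"))   # 0.125 vs 0.375
print(round(p_kn("francisco", "san"), 3))   # 0.781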

1,708 citations