Author

Balakrishnan Kannan

Bio: Balakrishnan Kannan is an academic researcher from Cochin University of Science and Technology. The author has contributed to research in topics including CVAR and Bayesian probability. The author has an h-index of 7 and has co-authored 16 publications receiving 148 citations.

Papers
Journal ArticleDOI
TL;DR: In this paper, betweenness centrality is defined as a measure of the influence of a vertex over the flow of information between every pair of vertices under the assumption that information primarily flows over the shortest paths between them.
Abstract: Several centrality measures have been introduced and studied for real-world networks. They capture different vertex characteristics that allow vertices to be ranked in order of importance in the network. Betweenness centrality is a measure of the influence of a vertex over the flow of information between every pair of vertices, under the assumption that information primarily flows over the shortest paths between them. In this paper we present the betweenness centrality of some important classes of graphs.
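For concreteness, here is a minimal sketch (not taken from the paper) that evaluates this definition, g(v) = Σ_{s≠v≠t} σ_st(v) / σ_st, on a few standard graph classes using networkx; the graph choices are illustrative only.

```python
# Hypothetical illustration: betweenness centrality on simple graph
# classes. g(v) sums, over all pairs (s, t) with s != v != t, the
# fraction of shortest s-t paths that pass through v.
import networkx as nx

for name, G in [
    ("path P5", nx.path_graph(5)),
    ("cycle C6", nx.cycle_graph(6)),
    ("star K1,5", nx.star_graph(5)),
]:
    bc = nx.betweenness_centrality(G, normalized=False)
    print(name, {v: round(c, 2) for v, c in bc.items()})
```

For the star, the center lies on every shortest path between the 5 leaves, so its unnormalized betweenness is C(5,2) = 10, while every leaf scores 0.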

44 citations

Posted Content
TL;DR: This paper provides an overview of offline handwritten character recognition in South Indian scripts, namely Malayalam, Tamil, Kannada and Telugu.
Abstract: Handwritten character recognition has always been a frontier area of research in the field of pattern recognition and image processing, and there is a large demand for OCR on handwritten documents. Although substantial studies have been performed on foreign scripts such as Chinese, Japanese and Arabic characters, only very little work can be traced on handwritten character recognition of Indian scripts, especially the South Indian scripts. This paper provides an overview of offline handwritten character recognition in South Indian scripts, namely Malayalam, Tamil, Kannada and Telugu.
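As a rough illustration of the kind of offline recognition pipeline such surveys cover, the following hypothetical sketch binarizes a page, segments it into connected components, and extracts a fixed-size zoning feature per component; the threshold, the 16x16 grid, and the overall design are assumptions, not a method taken from the paper.

```python
# Hypothetical offline handwritten-character pipeline sketch:
# binarize -> connected-component segmentation -> zoning features.
import numpy as np
from scipy import ndimage

def extract_features(page: np.ndarray, threshold: int = 128):
    """Return one 16x16 ink-density grid per connected component
    (a crude character candidate) of a grayscale page."""
    binary = page < threshold                  # ink pixels = True
    labels, n = ndimage.label(binary)          # segment components
    features = []
    for i in range(1, n + 1):
        ys, xs = np.where(labels == i)
        glyph = (labels == i)[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        zoned = np.zeros((16, 16))             # zoning: down-sample to
        h, w = glyph.shape                     # a fixed density grid
        for r in range(h):
            for c in range(w):
                zoned[r * 16 // h, c * 16 // w] += glyph[r, c]
        features.append(zoned / glyph.sum())
    return features
```

The feature vectors would then feed a classifier (nearest neighbour, SVM, or a neural network), which is where the surveyed systems differ most.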

26 citations

Book ChapterDOI
23 Jul 2010
TL;DR: The aim of this paper is to predict the Learning Disabilities of school-age children using a decision tree, a powerful and popular tool for classification and prediction in data mining.
Abstract: The aim of this paper is to predict the Learning Disabilities (LD) of school-age children using decision trees. Decision trees are a powerful and popular tool for classification and prediction in data mining. Different rules extracted from the decision tree are used for the prediction of learning disabilities. LDs affect about 10 percent of all children enrolled in schools. The problems of children with specific learning disabilities have been a cause of concern to parents and teachers for some time. This paper highlights the data mining technique of decision trees, used for classification and for extracting rules for the prediction of learning disabilities. As per the formulated rules, LD in any child can be identified.
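A minimal sketch of this approach, with hypothetical checklist attributes and toy data (the feature names and labels below are illustrative, not the paper's dataset): train a decision tree, then read the induced if-then rules back out, which play the role of the paper's formulated rules.

```python
# Hypothetical sketch: decision-tree classification plus rule
# extraction for LD screening. Features and data are illustrative.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy records: [reading_difficulty, spelling_errors, attention_issues]
X = [[1, 1, 0], [0, 0, 0], [1, 0, 1], [0, 1, 0], [1, 1, 1], [0, 0, 1]]
y = [1, 0, 1, 0, 1, 0]   # 1 = learning disability suspected

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[
    "reading_difficulty", "spelling_errors", "attention_issues"]))
print(tree.predict([[1, 0, 0]]))  # screen a new child against the rules
```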

19 citations

Journal ArticleDOI
TL;DR: This study proposes closest-fit data preprocessing combined with ANN classification to handle missing attribute values, and uses the algorithm in a systematic approach to classification that gives satisfactory results in the prediction of LD.
Abstract: Learning disability (LD) is a neurological condition that affects a child's brain and impairs the ability to carry out one or many specific tasks. LD affects about 10% of children enrolled in schools. There is no cure for learning disabilities and they are lifelong. The problems of children with specific learning disabilities have been a cause of concern to parents and teachers for some time. Just as there are many different types of LDs, there are a variety of tests that may be done to pinpoint the problem. The information gained from an evaluation is crucial for finding out how the parents and the school authorities can provide the best possible learning environment for the child. This paper proposes a new approach in artificial neural networks (ANN) for identifying LD in children at early stages, so as to solve the problems faced by them and to benefit the students, their parents and school authorities. In this study, we propose a closest fit algorithm for data preprocessing, combined with ANN classification, to handle missing attribute values. This algorithm imputes the missing values in the preprocessing stage. Ignoring missing attribute values is a common trend in classification algorithms, but in this paper we use the algorithm in a systematic approach to classification, which gives a satisfactory result in the prediction of LD. It acts as a tool for predicting LD accurately, and reliable information about the child is made available to those concerned.
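The closest-fit idea can be sketched as follows; this is a hedged illustration in which the distance measure (mean absolute difference over the attributes both records share) and the toy data are assumptions, not details taken from the paper.

```python
# Hypothetical sketch of closest-fit imputation: each missing value is
# copied from the most similar record that has that value present.
import numpy as np

def closest_fit_impute(X: np.ndarray) -> np.ndarray:
    X = X.astype(float)
    for i, row in enumerate(X):
        missing = np.isnan(row)
        if not missing.any():
            continue
        best_j, best_d = None, np.inf
        for j, other in enumerate(X):
            if j == i or np.isnan(other[missing]).any():
                continue                      # donor must have the values
            shared = ~missing & ~np.isnan(other)
            if not shared.any():
                continue
            d = np.abs(row[shared] - other[shared]).mean()
            if d < best_d:                    # closest-fit donor so far
                best_j, best_d = j, d
        if best_j is not None:
            row[missing] = X[best_j, missing]  # impute before classification
    return X

print(closest_fit_impute(np.array([[1, 2, np.nan], [1, 2, 3], [9, 8, 7]])))
```

The imputed matrix would then be fed to the ANN classifier, instead of discarding incomplete records.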

17 citations

Journal ArticleDOI
TL;DR: In this article, a matrix-variate Bayesian CVAR mixture model is proposed to incorporate estimation of model parameters in the presence of price series level shifts which are not accurately modeled in the standard Gaussian error correction model.
Abstract: We consider a statistical model for pairs of traded assets, based on a Cointegrated Vector Auto Regression (CVAR) model. We extend standard CVAR models to incorporate estimation of model parameters in the presence of price series level shifts which are not accurately modeled in the standard Gaussian error correction model (ECM) framework. This involves developing a novel matrix-variate Bayesian CVAR mixture model, composed of Gaussian errors intra-day and α-stable errors inter-day in the ECM framework. To achieve this we derive conjugate posterior models for the Scale Mixtures of Normals (SMiN CVAR) representation of α-stable inter-day innovations. These results are generalized to asymmetric intractable models for the innovation noise at inter-day boundaries, allowing for skewed α-stable models via Approximate Bayesian Computation. Our proposed model and sampling methodology are general, incorporating the current CVAR literature on Gaussian models, whilst allowing for price series level shifts to occur either at randomly estimated time points or at time points known a priori. We focus analysis on regularly observed non-Gaussian level shifts that can have a significant effect on estimation performance in statistical models that fail to account for such level shifts, such as at the close and open times of markets. We illustrate our model and the corresponding estimation procedures we develop on both synthetic and real data. The real data analysis investigates Australian dollar, Canadian dollar, five and ten year notes (bonds) and NASDAQ price series. In two studies we demonstrate the suitability of statistically modeling the heavy tailed noise processes for inter-day price shifts via an α-stable model. Then we fit the novel Bayesian matrix-variate CVAR model developed, which incorporates a composite noise model for α-stable and matrix-variate Gaussian errors, under both symmetric and non-symmetric α-stable assumptions.
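To illustrate only the composite noise idea (this is not the authors' sampler), the sketch below simulates innovations that are Gaussian within a trading day and heavy-tailed α-stable at day boundaries; the day length of 78 ticks, α = 1.7, and the symmetric β = 0 are assumptions made for the demo.

```python
# Hypothetical sketch of the composite noise model: intra-day Gaussian
# errors with an alpha-stable shock at each inter-day boundary.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
ticks_per_day, n_days = 78, 20
alpha, beta = 1.7, 0.0            # symmetric heavy-tailed case

errors = []
for _ in range(n_days):
    day = rng.normal(0.0, 1.0, ticks_per_day)     # intra-day Gaussian
    day[0] = levy_stable.rvs(alpha, beta, random_state=rng)  # open shock
    errors.append(day)
errors = np.concatenate(errors)

# Finite-sample heavy-tail diagnostic: far above 3 (the Gaussian value)
# whenever the stable boundary shocks dominate.
print("kurtosis proxy:", np.mean(errors**4) / np.mean(errors**2) ** 2)
```

In the paper's setting such innovations enter the error correction model, and the stable component is handled through a scale mixture of normals representation rather than direct simulation.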

16 citations


Cited by
Book ChapterDOI
Sune Karlsson
TL;DR: This chapter reviews Bayesian methods for inference and forecasting with VAR models, with special attention given to the implementation of the simulation algorithm.
Abstract: This chapter reviews Bayesian methods for inference and forecasting with VAR models. Bayesian inference and, by extension, forecasting depend on numerical methods for simulating from the posterior distribution of the parameters, and special attention is given to the implementation of the simulation algorithm.
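A minimal sketch of the simplest case such a review covers: posterior simulation for a VAR(1) under a flat prior, where Σ | Y is inverse-Wishart and the coefficient matrix is conditionally matrix normal. The simulated data, dimensions, and draw count below are illustrative.

```python
# Hypothetical sketch: direct Monte Carlo from the posterior of a
# VAR(1) under a flat prior. Sigma | Y ~ IW(S, T - 1 - k), and
# vec(B) | Sigma, Y ~ N(vec(B_ols), Sigma kron (X'X)^-1).
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(1)
T, k = 200, 2
Y = np.zeros((T, k))
for t in range(1, T):                       # simulate a stable VAR(1)
    Y[t] = 0.5 * Y[t - 1] + rng.normal(0, 0.1, k)

X, y = Y[:-1], Y[1:]                        # lagged regressors, targets
B_ols = np.linalg.solve(X.T @ X, X.T @ y)   # posterior mean of B
S = (y - X @ B_ols).T @ (y - X @ B_ols)     # residual scale matrix
XtX_inv = np.linalg.inv(X.T @ X)

draws = []
for _ in range(1000):
    Sigma = invwishart.rvs(df=T - 1 - k, scale=S, random_state=rng)
    b = rng.multivariate_normal(B_ols.T.ravel(), np.kron(Sigma, XtX_inv))
    draws.append(b.reshape(k, k).T)         # un-vec the coefficient draw
print("posterior mean of B:\n", np.mean(draws, axis=0))
```

Forecasts then follow by iterating each drawn (B, Σ) forward from the last observation, which is the step where the chapter's implementation details matter.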

169 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a survey of the growing literature on pairs trading frameworks, i.e., relative value arbitrage strategies involving two or more securities, and provide an in-depth assessment of each approach, revealing strengths and weaknesses relevant for further research.
Abstract: This survey reviews the growing literature on pairs trading frameworks, i.e., relative-value arbitrage strategies involving two or more securities. Research is categorized into five groups: The distance approach uses nonparametric distance metrics to identify pairs trading opportunities. The cointegration approach relies on formal cointegration testing to unveil stationary spread time series. The time-series approach focuses on finding optimal trading rules for mean-reverting spreads. The stochastic control approach aims at identifying optimal portfolio holdings in the legs of a pairs trade relative to other available securities. The category “other approaches” contains further relevant pairs trading frameworks with only a limited set of supporting literature. Finally, pairs trading profitability is reviewed in the light of market frictions. Drawing from a large set of research consisting of over 100 references, an in-depth assessment of each approach is performed, ultimately revealing strengths and weaknesses relevant for further research and for implementation.
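As a hedged sketch of the cointegration approach alone (simulated price series, an illustrative OLS hedge ratio, and a 2-standard-deviation entry threshold), one can test a candidate pair with the Engle-Granger test in statsmodels and trade z-score deviations of the resulting spread.

```python
# Hypothetical sketch of the cointegration approach to pairs trading:
# formal cointegration test, then signals from the spread's z-score.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(2)
common = np.cumsum(rng.normal(size=500))            # shared random walk
a = common + rng.normal(scale=0.5, size=500)        # leg 1
b = 0.8 * common + rng.normal(scale=0.5, size=500)  # leg 2

t_stat, p_value, _ = coint(a, b)                    # Engle-Granger test
print(f"cointegration p-value: {p_value:.3f}")      # small => tradable pair

beta = np.polyfit(b, a, 1)[0]                       # OLS hedge ratio
spread = a - beta * b
z = (spread - spread.mean()) / spread.std()
signal = np.where(z > 2, -1, np.where(z < -2, 1, 0))  # short/long spread
print("entry signals:", np.count_nonzero(signal))
```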

140 citations

Journal ArticleDOI
TL;DR: In this article, the authors demonstrate that the statistical distribution of betweenness centrality is invariant for planar networks, which are used to model many infrastructural and biological systems.
Abstract: The betweenness centrality, a path-based global measure of flow, is a static predictor of congestion and load on networks. Here we demonstrate that its statistical distribution is invariant for planar networks, which are used to model many infrastructural and biological systems. Empirical analysis of street networks from 97 cities worldwide, along with simulations of random planar graph models, indicates the observed invariance to be a consequence of a bimodal regime consisting of an underlying tree structure for high betweenness nodes, and a low betweenness regime corresponding to loops providing local path alternatives. Furthermore, the high betweenness nodes display a non-trivial spatial clustering with increasing spatial correlation as a function of the edge density. Our results suggest that the spatial distribution of betweenness is a more accurate discriminator than its statistics for comparing static congestion patterns and their evolution across cities, as demonstrated by analyzing 200 years of street data for Paris.

105 citations

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate that the distribution of betweenness centrality (BC) is an invariant quantity in most planar graphs, and they confirm this invariance through an empirical analysis of street networks from 97 of the most populous cities worldwide, at scales significantly larger than previous studies.
Abstract: We demonstrate that the distribution of betweenness centrality (BC), a global structural metric based on network flow, is an invariant quantity in most planar graphs. We confirm this invariance through an empirical analysis of street networks from 97 of the most populous cities worldwide, at scales significantly larger than previous studies. We also find that the BC distribution is robust to major alterations in the network, including significant changes to its topology and edge weight structure, indicating that the only relevant factors shaping the distribution are the number of nodes and edges as well as the constraint of planarity. Through simulations of random planar graph models and analytical calculations on Cayley trees, this invariance is demonstrated to be a consequence of a bimodal regime consisting of an underlying tree structure for high BC nodes, and a low BC regime arising from the presence of loops providing local path alternatives. Furthermore, the high BC nodes display a non-trivial spatial dependence, with increasing spatial correlation as a function of the number of edges, leading them to cluster around the barycenter at large densities. Our results suggest that the spatial distribution of the BC is a more accurate discriminator when comparing patterns across cities. Moreover, since the BC is a static predictor of congestion in planar graphs, the observed invariance and spatial dependence have practical implications for infrastructural and biological networks. In particular, for the case of street networks, as long as planarity is conserved, bottlenecks continue to persist, and the effect of planned interventions to alleviate structural congestion will be limited primarily to load redistribution, a feature confirmed by analyzing 200 years of data for central Paris.
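A hypothetical sketch of the kind of random planar graph simulation described here: build Delaunay triangulations of uniformly random points (a standard random planar graph model, assumed for illustration) and inspect the resulting betweenness distribution.

```python
# Hypothetical sketch: betweenness distribution on random planar
# graphs built as Delaunay triangulations of uniform random points.
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

rng = np.random.default_rng(3)
for n in (200, 400):
    pts = rng.random((n, 2))
    G = nx.Graph()
    for tri in Delaunay(pts).simplices:     # each triangle -> three edges
        for i in range(3):
            G.add_edge(tri[i], tri[(i + 1) % 3])
    bc = np.array(list(nx.betweenness_centrality(G).values()))
    # Bimodal picture: a low-BC bulk (loops) and a high-BC tail
    # (the tree-like backbone carrying most shortest paths).
    print(n, "median BC:", round(np.median(bc), 4), "max BC:", round(bc.max(), 3))
```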

79 citations

Proceedings ArticleDOI
01 Dec 2014
TL;DR: A literature review of Big Data mining and its issues and challenges is presented, with emphasis on the distinguishing features of Big Data, and some methods for dealing with big data are discussed.
Abstract: Data has become an indispensable part of every economy, industry, organization, business function and individual. Big Data is a term used to identify datasets whose size is beyond the ability of typical database software tools to store, manage and analyze. Big Data introduces unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation and measurement errors. These challenges are distinctive and require new computational and statistical paradigms. This paper presents a literature review of Big Data mining and its issues and challenges, with emphasis on the distinguishing features of Big Data. It also discusses some methods for dealing with big data.

61 citations