
# Indian Statistical Institute

Education • Kolkata, India

About: Indian Statistical Institute is an education and research organization based in Kolkata, India. It is known for its research contributions in the topics Population and Cluster analysis. The organization has 3475 authors who have published 14247 publications receiving 243080 citations. The organization is also known as ISI and ISI Calcutta.

Topics: Population, Cluster analysis, Estimator, Fuzzy logic, Feature extraction

##### Papers published on a yearly basis

##### Papers



[...]

TL;DR: This review covers both fuzzy and non-fuzzy techniques, including color image segmentation and neural network based approaches, and addresses the issue of quantitative evaluation of segmentation results.

Abstract: Many image segmentation techniques are available in the literature. Some of these techniques use only the gray level histogram, some use spatial details, while others use fuzzy set theoretic approaches. Most of these techniques are not suitable for noisy environments. Some work has been done using the Markov Random Field (MRF) model, which is robust to noise but computationally involved. Neural network architectures, which help to get the output in real time because of their parallel processing ability, have also been used for segmentation, and they work well even when the noise level is very high. The literature on color image segmentation is not as rich as that for gray tone images. This paper critically reviews and summarizes some of these techniques. Attempts have been made to cover both fuzzy and non-fuzzy techniques, including color image segmentation and neural network based approaches. Adequate attention is paid to segmentation of range images and magnetic resonance images. It also addresses the issue of quantitative evaluation of segmentation results.

3,386 citations
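The gray-level histogram thresholding mentioned in the abstract can be sketched in a few lines. The toy image and threshold below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def threshold_segment(image, t):
    """Label pixels above threshold t as foreground (1), the rest as background (0)."""
    return (image > t).astype(np.uint8)

# A toy 2x2 "image": dark pixels on the left, bright pixels on the right.
image = np.array([[10, 200],
                  [30, 220]])
mask = threshold_segment(image, 128)
```

In practice the threshold is chosen from the histogram itself (e.g. at a valley between two modes) rather than fixed by hand.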


[...]

TL;DR: The Environmental Kuznets Curve (EKC) hypothesis, as discussed by the authors, proposes an inverted-U-shaped relationship between different pollutants and per capita income: environmental pressure increases up to a certain level as income rises; after that, it decreases.

Abstract: The Environmental Kuznets Curve (EKC) hypothesis postulates an inverted-U-shaped relationship between different pollutants and per capita income, i.e., environmental pressure increases up to a certain level as income goes up; after that, it decreases. An EKC actually reveals how a technically specified measurement of environmental quality changes as the fortunes of a country change. A sizeable literature on the EKC has grown in the recent period. The common point of all the studies is the assertion that environmental quality deteriorates at the early stages of economic development/growth and subsequently improves at the later stages. In other words, environmental pressure increases faster than income at early stages of development and slows down relative to GDP growth at higher income levels. This paper reviews some theoretical developments and empirical studies dealing with the EKC phenomenon. Possible explanations for the EKC are seen in (i) the progress of economic development, from a clean agrarian economy to a polluting industrial economy to a clean service economy; (ii) the tendency of people with higher incomes to have a stronger preference for environmental quality, etc. Evidence for the existence of the EKC has been questioned from several corners. Only some air quality indicators, especially local pollutants, show evidence of an EKC. Moreover, even where an EKC is empirically observed, there is no agreement in the literature on the income level at which environmental degradation starts declining. This paper provides an overview of the EKC literature, its background history, conceptual insights, policy implications, and the conceptual and methodological critique.

2,378 citations
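The inverted-U can be made concrete with a small numerical sketch: model environmental pressure as a quadratic in log income and read the turning point off the fitted coefficients. The data below are synthetic, constructed so the peak sits at a log income of 9; they are not from any EKC study.

```python
import numpy as np

# Synthetic inverted-U: pressure rises with log income, peaks, then falls.
log_income = np.linspace(6.0, 11.0, 200)
pressure = -2.0 * (log_income - 9.0) ** 2 + 10.0  # constructed peak at log income 9

# Fit pressure = b2*x^2 + b1*x + b0; the turning point of a quadratic
# is at x = -b1 / (2*b2), the income level where degradation starts declining.
b2, b1, _ = np.polyfit(log_income, pressure, 2)
turning_point = -b1 / (2.0 * b2)
```

Empirical EKC studies estimate the same kind of quadratic (often with controls and panel data), which is why the literature's disagreement centers on where this turning point lies.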


[...]

TL;DR: The earliest method of estimating statistical parameters is the method of least squares, due to Markoff. Given a set of observations whose expectations are linear functions of a number of unknown parameters, the problem Markoff posed is to find a linear function of the observations whose expectation is an assigned linear function of the unknown parameters and whose variance is a minimum.

Abstract: The earliest method of estimation of statistical parameters is the method of least squares, due to Markoff. A set of observations whose expectations are linear functions of a number of unknown parameters being given, the problem which Markoff posed for solution is to find a linear function of the observations whose expectation is an assigned linear function of the unknown parameters and whose variance is a minimum. There is no assumption about the distribution of the observations except that each has a finite variance.

1,721 citations
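The Markoff setup described above can be sketched numerically: observations y whose expectations are linear in unknown parameters, estimated by least squares (the minimum-variance linear unbiased estimator, by the Gauss-Markov theorem). The design matrix and parameter values below are illustrative; with noiseless observations the estimate recovers the parameters exactly.

```python
import numpy as np

# Observations whose expectations are linear functions of unknown parameters:
# E[y] = X @ beta, with no distributional assumption beyond finite variance.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true  # noiseless here, so least squares recovers beta exactly

# Least-squares estimate of beta.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With noisy observations, `beta_hat` would be the linear unbiased estimator of smallest variance for any linear function of the parameters, which is exactly the optimality property Markoff asked for.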


[...]

TL;DR: An unsupervised feature selection algorithm suitable for data sets large in both dimension and size; the method measures similarity between features and removes redundant ones, requires no search, and is therefore fast.

Abstract: In this article, we describe an unsupervised feature selection algorithm suitable for data sets, large in both dimension and size. The method is based on measuring similarity between features whereby redundancy therein is removed. This does not need any search and, therefore, is fast. A new feature similarity measure, called maximum information compression index, is introduced. The algorithm is generic in nature and has the capability of multiscale representation of data sets. The superiority of the algorithm, in terms of speed and performance, is established extensively over various real-life data sets of different sizes and dimensions. It is also demonstrated how redundancy and information loss in feature selection can be quantified with an entropy measure.

1,332 citations
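A minimal sketch of the idea: measure similarity between feature pairs and greedily drop features that are near-duplicates of ones already kept. Plain absolute correlation is used here as a simplified stand-in for the paper's maximum information compression index, and the data and threshold are illustrative.

```python
import numpy as np

def select_features(X, threshold=0.95):
    """Greedy redundancy removal: keep a feature only if its absolute
    correlation with every already-kept feature is below `threshold`.
    (Correlation is a stand-in for the paper's similarity measure.)"""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in kept):
            kept.append(j)
    return kept

rng = np.random.default_rng(1)
a = rng.normal(size=100)
b = rng.normal(size=100)
X = np.column_stack([a, 2 * a, b])  # column 1 is a scaled copy of column 0
kept = select_features(X)
```

No combinatorial search over feature subsets is involved — one pass over the features suffices — which is the source of the speed claim in the abstract.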


[...]

TL;DR: The superiority of the GA-clustering algorithm over the commonly used K-means algorithm is extensively demonstrated for four artificial and three real-life data sets.

Abstract: A genetic algorithm-based clustering technique, called GA-clustering, is proposed in this article. The searching capability of genetic algorithms is exploited in order to search for appropriate cluster centres in the feature space such that a similarity metric of the resulting clusters is optimized. The chromosomes, which are represented as strings of real numbers, encode the centres of a fixed number of clusters. The superiority of the GA-clustering algorithm over the commonly used K-means algorithm is extensively demonstrated for four artificial and three real-life data sets.

1,291 citations
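The idea can be sketched with a toy genetic algorithm: chromosomes encode the k cluster centres as real-valued strings and evolve to minimize within-cluster distance. The operators below (elitist truncation selection, Gaussian mutation, no crossover) and the data are simplifications for illustration, not the paper's exact scheme.

```python
import numpy as np

def clustering_error(centres, X):
    """Sum of squared distances from each point to its nearest centre."""
    d = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).sum()

def ga_cluster(X, k=2, pop_size=20, generations=50, seed=0):
    """Toy GA: each chromosome is a flattened string of k centre coordinates.
    Elitist truncation selection keeps the best half; Gaussian mutation
    produces children. Elitism guarantees the best solution never worsens."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    pop = rng.uniform(X.min(), X.max(), size=(pop_size, k * dim))
    for _ in range(generations):
        errors = np.array([clustering_error(c.reshape(k, dim), X) for c in pop])
        elite = pop[np.argsort(errors)[: pop_size // 2]]       # keep best half
        children = elite + rng.normal(scale=0.1, size=elite.shape)
        pop = np.vstack([elite, children])
    errors = np.array([clustering_error(c.reshape(k, dim), X) for c in pop])
    return pop[np.argmin(errors)].reshape(k, dim)

# Two well-separated clusters at (0, 0) and (5, 5).
X = np.vstack([np.zeros((10, 2)), np.ones((10, 2)) * 5])
centres = ga_cluster(X)
```

Unlike K-means, nothing here depends on a good initialization of the centres: the population-based search explores the feature space globally, which is the advantage the abstract claims.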

##### Authors

Showing all 3475 results

Name | H-index | Papers | Citations
---|---|---|---
Suvadeep Bose | 154 | 960 | 129071
Aravinda Chakravarti | 120 | 451 | 99632
Martin Ravallion | 115 | 570 | 55380
Soma Mukherjee | 95 | 266 | 59549
Jagdish N. Bhagwati | 81 | 368 | 27038
Sankar K. Pal | 70 | 446 | 23727
Dabeeru C. Rao | 69 | 330 | 23214
Jiju Antony | 68 | 411 | 17290
Swagatam Das | 64 | 370 | 19153
Suman Banerjee | 58 | 266 | 14295
Nikhil R. Pal | 55 | 266 | 18481
Debraj Ray | 55 | 210 | 13663
Kaushik Basu | 54 | 323 | 13030
Dipankar Chakraborti | 54 | 115 | 12078
Abhik Ghosh | 54 | 420 | 10555