Institution
National Institute of Technology, Kurukshetra
Education • Kurukshetra, Haryana, India
About: National Institute of Technology, Kurukshetra is an education organization based in Kurukshetra, Haryana, India. It is known for its research contributions in the topics: Control theory & Cloud computing. The organization has 2449 authors who have published 5174 publications receiving 53995 citations. The organization is also known as: NIT Kurukshetra.
Topics: Control theory, Cloud computing, Computer science, Electric power system, Photovoltaic system
Papers published on a yearly basis
Papers
TL;DR: A survey of IoT and Cloud Computing with a focus on the security issues of both technologies is presented, showing how Cloud Computing technology improves the functioning of the IoT.
894 citations
TL;DR: Support vector machines (SVMs) are attractive for the classification of remotely sensed data, with some claims that the method is insensitive to the dimensionality of the data and therefore does not require a dimensionality-reduction analysis in preprocessing; however, it is shown that the accuracy of a classification by an SVM does vary as a function of the number of features used.
Abstract: Support vector machines (SVM) are attractive for the classification of remotely sensed data, with some claims that the method is insensitive to the dimensionality of the data and, therefore, does not require a dimensionality-reduction analysis in preprocessing. Here, a series of classification analyses with two hyperspectral sensor data sets reveals that the accuracy of a classification by an SVM does vary as a function of the number of features used. Critically, it is shown that the accuracy of a classification may decline significantly (at 0.05 level of statistical significance) with the addition of features, particularly if a small training sample is used. This highlights a dependence of the accuracy of classification by an SVM on the dimensionality of the data and, therefore, the potential value of undertaking a feature-selection analysis prior to classification. Additionally, it is demonstrated that, even when a large training sample is available, feature selection may still be useful. For example, the accuracy derived from the use of a small number of features may be noninferior (at 0.05 level of significance) to that derived from the use of a larger feature set, providing potential advantages in relation to issues such as data storage and computational processing costs. Feature selection may, therefore, be a valuable analysis to include in preprocessing operations for classification by an SVM.
708 citations
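The paper's central point, that SVM accuracy depends on the number of features and that feature selection before classification can help, can be sketched with scikit-learn. The synthetic data set, the sample sizes, and the `SelectKBest` selector below are illustrative assumptions, not the paper's hyperspectral data or its feature-selection procedure:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for hyperspectral data: 100 "bands", only 10 informative.
X, y = make_classification(n_samples=400, n_features=100, n_informative=10,
                           n_redundant=0, random_state=0)

# Deliberately small training sample, the case the paper flags as risky.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=60,
                                          stratify=y, random_state=0)

# SVM trained on the full feature set.
acc_all = SVC().fit(X_tr, y_tr).score(X_te, y_te)

# SVM after univariate feature selection (keep the 10 best-scoring features).
sel = SelectKBest(f_classif, k=10).fit(X_tr, y_tr)
acc_sel = SVC().fit(sel.transform(X_tr), y_tr).score(sel.transform(X_te), y_te)

print(f"accuracy, all 100 features: {acc_all:.3f}")
print(f"accuracy, 10 selected features: {acc_sel:.3f}")
```

With many noise features and few training samples, the selected-feature model typically matches or beats the full-feature model, mirroring the paper's finding; exact numbers depend on the synthetic data.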
TL;DR: The main methodologies used in electricity price forecasting are reviewed in this paper, and the various price-influencing factors used by different researchers are classified for reference.
492 citations
TL;DR: The proposed solutions for collecting and managing sensors' data in a smart building could lead to an energy-efficient smart building, and thus to a Green Smart Building.
460 citations
21 Jul 2003
TL;DR: Results obtained by the random forests classifier, another technique for generating an ensemble of classifiers, are compared with ensembles of decision tree classifiers; the comparison suggests that bagging performs better than boosting when the training data contain noise.
Abstract: In recent years, a number of works have reported the use of combinations of multiple classifiers to produce a single classification and demonstrated significant performance improvements. The resulting classifier, referred to as an ensemble classifier, is a set of classifiers whose individual decisions are combined by weighted or unweighted voting to classify new examples. An ensemble is often more accurate than the individual classifiers that make it up. In remote sensing, Giacinto and Roli (1997) and Roli et al. (1997) report the use of ensembles of neural networks and the integration of classification results from different types of classifiers. Studies that grew an ensemble of decision trees and allowed them to vote for the most popular class reported a significant improvement in classification accuracy for land cover classification. This paper presents results obtained by the random forests classifier, another technique for generating an ensemble of classifiers, and compares its performance with ensembles of decision tree classifiers. A classification accuracy of 88.32% is achieved by the random forest classifier, in comparison with 87.38% and 87.28% by decision tree ensembles created using boosting and bagging techniques, respectively. Further, the study also suggests that bagging performs better than boosting when the training data contain noise.
437 citations
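The comparison the abstract describes, a random forest against bagged and boosted decision-tree ensembles, can be sketched with scikit-learn. The synthetic data and the specific estimator settings below are illustrative assumptions, not the paper's land-cover data or experimental setup:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the land-cover classification task.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "bagging": BaggingClassifier(DecisionTreeClassifier(random_state=0),
                                 n_estimators=100, random_state=0),
    "boosting": AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                                   n_estimators=100, random_state=0),
}

# 5-fold cross-validated accuracy for each ensemble technique.
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```

All three are voting ensembles of decision trees; the random forest additionally randomizes the features considered at each split, which is what distinguishes it from plain bagging.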
Authors
Showing all 2503 results
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Praveen Kumar | 88 | 1339 | 35718 |
| Santosh Kumar | 80 | 1196 | 29391 |
| Ashwani Kumar | 66 | 703 | 18099 |
| Amit Singh | 57 | 640 | 13795 |
| Brij B. Gupta | 51 | 368 | 9332 |
| Rajiv Kumar | 51 | 561 | 15404 |
| Sunil Luthra | 45 | 162 | 6485 |
| Pramod Kumar | 39 | 170 | 4248 |
| Abid Haleem | 39 | 304 | 7178 |
| Amit Mishra | 38 | 401 | 5735 |
| Mahesh Pal | 36 | 105 | 7081 |
| Ashutosh Kumar Singh | 35 | 397 | 9381 |
| Vikas Mittal | 34 | 310 | 5182 |
| Jitendra Kumar | 32 | 127 | 3359 |
| Suresh Kumar | 29 | 407 | 3580 |