Author

Anugrah Kumar

Bio: Anugrah Kumar is an academic researcher from VIT University. The author has contributed to research in the topics Rough set and Phishing. The author has an h-index of 2 and has co-authored 7 publications receiving 8 citations.

Papers
Proceedings ArticleDOI
01 Dec 2013
TL;DR: In this article, the authors propose an approach to phishing detection using Rough Set Theory, which can be a powerful tool for applications involving vague or imprecise data.
Abstract: Phishing is a common online weapon used by phishers to acquire confidential information from users through deception. Since the inception of the internet, nearly everything, from money transactions to information sharing, has moved online in most parts of the world. This has also given rise to malicious activities such as phishing. Detecting phishing is an intricate process due to the complexity, ambiguity, and copious number of possible factors responsible for it. Rough sets can be a powerful tool when working on applications containing vague or imprecise data. This paper proposes an approach to phishing detection using Rough Set Theory. Thirteen basic factors directly responsible for phishing are grouped into four strata, and a Reliability Factor is determined from the outcomes of these strata using Rough Set Theory. The Reliability Factor indicates whether a suspected site is likely to be valid or fake. Rough Set Theory is also used to determine the most and least influential factors contributing to phishing.
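As a rough illustration of the kind of rough-set analysis the abstract describes, the following minimal Python sketch computes the dependency degree of a toy decision table and the significance of each factor. The stratum names and data are hypothetical, not the paper's dataset or exact procedure:

```python
from collections import defaultdict

# Hypothetical decision table: each row is a suspected site described by
# four strata outcomes (condition attributes) and a decision ("valid"/"fake").
rows = [
    {"url_stratum": "bad", "content_stratum": "bad", "ssl_stratum": "bad", "domain_stratum": "old", "decision": "fake"},
    {"url_stratum": "ok",  "content_stratum": "ok",  "ssl_stratum": "ok",  "domain_stratum": "old", "decision": "valid"},
    {"url_stratum": "bad", "content_stratum": "ok",  "ssl_stratum": "bad", "domain_stratum": "new", "decision": "fake"},
    {"url_stratum": "ok",  "content_stratum": "bad", "ssl_stratum": "ok",  "domain_stratum": "new", "decision": "valid"},
    {"url_stratum": "ok",  "content_stratum": "bad", "ssl_stratum": "ok",  "domain_stratum": "new", "decision": "fake"},
]
conditions = ["url_stratum", "content_stratum", "ssl_stratum", "domain_stratum"]

def dependency(attrs):
    """Rough-set dependency degree gamma: fraction of rows whose
    indiscernibility class (w.r.t. attrs) is consistent on the decision."""
    classes = defaultdict(list)
    for i, r in enumerate(rows):
        classes[tuple(r[a] for a in attrs)].append(i)
    in_positive_region = sum(len(idx) for idx in classes.values()
                             if len({rows[i]["decision"] for i in idx}) == 1)
    return in_positive_region / len(rows)

gamma_all = dependency(conditions)
# Significance of each factor: drop in dependency when that factor is removed.
for a in conditions:
    sig = gamma_all - dependency([c for c in conditions if c != a])
    print(f"{a}: significance = {sig:.2f}")
```

Factors whose removal causes the largest drop in the dependency degree would count as the most influential; factors with zero significance are candidates for the least influential.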

2 citations

Posted Content

2 citations

Proceedings ArticleDOI
15 Apr 2013
TL;DR: Rough Set Theory is used to trim down the massive data on the factors that determine a network's vulnerability to a successful intrusion or attack, and rules are deduced on that basis.
Abstract: The vulnerability of a network (such as an office LAN or a set of computer systems connected for secure data communication) is defined as the susceptibility of that network to a successful intrusion or attack. A network's vulnerability depends on specific factors. In this paper we use Rough Set Theory to trim down the massive data on these factors. The paper considers various combinations of attack factors and deduces rules based on Rough Set Theory, thereby determining a Vulnerability Factor for the network that describes its proneness to a successful attack. Although the paper considers only a few factors, the number of possible factors can be enormous.
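A minimal sketch of the "trimming down" step in the spirit of a rough-set reduct search: find the smallest subset of attack factors that classifies intrusion outcomes as well as the full set. The factor names and toy data below are hypothetical, not the paper's:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical table: attack factors (conditions) and whether an intrusion succeeded.
rows = [
    {"open_ports": "many", "patch_level": "old", "firewall": "off", "intrusion": "yes"},
    {"open_ports": "few",  "patch_level": "new", "firewall": "on",  "intrusion": "no"},
    {"open_ports": "many", "patch_level": "new", "firewall": "off", "intrusion": "yes"},
    {"open_ports": "few",  "patch_level": "old", "firewall": "on",  "intrusion": "no"},
    {"open_ports": "few",  "patch_level": "old", "firewall": "off", "intrusion": "yes"},
]
factors = ["open_ports", "patch_level", "firewall"]

def consistent_fraction(attrs):
    """Share of rows whose equivalence class (w.r.t. attrs) has a single outcome."""
    classes = defaultdict(list)
    for r in rows:
        classes[tuple(r[a] for a in attrs)].append(r["intrusion"])
    return sum(len(v) for v in classes.values() if len(set(v)) == 1) / len(rows)

full = consistent_fraction(factors)
# A reduct: the smallest factor subset with the same discriminating power as the full set.
reduct = next(
    list(subset)
    for k in range(1, len(factors) + 1)
    for subset in combinations(factors, k)
    if consistent_fraction(list(subset)) == full
)
print("reduct:", reduct)
```

On this toy table the single factor "firewall" already separates the outcomes as well as all three factors together, which is exactly the kind of reduction the abstract refers to.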

1 citation

Journal ArticleDOI
TL;DR: A new content-based image search algorithm using rough sets and a relational graph is proposed, which enhances human-software interaction and is also efficient.
Abstract: The conventional, textual form of search used on devices now seems tedious, considering the advances made in human-software interaction since the emergence and development of touch-screen interfaces. In the past decade, finger-touch and multi-touch interfaces have completely altered the way we interact with devices. The input method implemented in search mechanisms uses textual string input; extending it to graphical search would enhance human-software interaction and is also efficient. This paper proposes a new content-based image search algorithm using rough sets and a relational graph.
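The abstract does not spell out the algorithm, so the following is only a generic sketch of how a relational-graph comparison for content-based image search could look: each image is a graph of labelled regions and spatial relations, and candidates are ranked by overlap with the query graph. All names and data are hypothetical, not the paper's method:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RegionGraph:
    nodes: frozenset   # region labels, e.g. "sky", "tree"
    edges: frozenset   # (label_a, relation, label_b) triples

def similarity(query: RegionGraph, candidate: RegionGraph) -> float:
    # Equal-weight blend of node-label overlap and relational-edge overlap.
    node_overlap = len(query.nodes & candidate.nodes) / max(len(query.nodes), 1)
    edge_overlap = len(query.edges & candidate.edges) / max(len(query.edges), 1)
    return 0.5 * node_overlap + 0.5 * edge_overlap

database = {
    "beach.jpg":  RegionGraph(frozenset({"sky", "sea", "sand"}),
                              frozenset({("sky", "above", "sea"), ("sea", "above", "sand")})),
    "forest.jpg": RegionGraph(frozenset({"sky", "tree", "grass"}),
                              frozenset({("sky", "above", "tree"), ("tree", "above", "grass")})),
}
query = RegionGraph(frozenset({"sky", "sea"}), frozenset({("sky", "above", "sea")}))

scores = {name: similarity(query, g) for name, g in database.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(name, round(score, 2))
```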

1 citation

Proceedings ArticleDOI
01 Dec 2013
TL;DR: In this paper, an artificial intelligence approach based on rough set theory is proposed that reduces the number of gates in a circuit using decision rules.
Abstract: High speed, accuracy, meticulousness and quick response are vital necessities of the modern digital world. An efficient electronic circuit directly affects the operation of the whole system, and different tools are required to solve different types of engineering problems. Improving the efficiency and accuracy of an electronic circuit while lowering its power consumption has always been a bottleneck, so there is a constant need for circuit miniaturization: it saves time and power during gate switching and reduces wiring complexity. To overcome this problem we propose an artificial intelligence (AI) based approach that uses Rough Set Theory. Rough set theory, proposed by Z. Pawlak in 1982, is a mathematical tool that deals with uncertainty and vagueness; decisions can be generated with it by removing unwanted and superfluous data. We have reduced the number of gates without affecting the output of the given circuit. This paper proposes an AI technique based on rough set theory that reduces the number of gates in the circuit using decision rules.
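As a hedged illustration of how rough-set decision rules can drop redundant inputs from a logic function (the truth table below is a toy example, not the paper's circuit), this sketch shortens each row of a decision table to a minimal set of input conditions that still determines the output; the shortened rules need fewer literals, and hence fewer gates:

```python
from itertools import combinations

# Hypothetical truth table for f(a, b, c) = a AND b (input c is redundant).
inputs = ["a", "b", "c"]
table = [((a, b, c), int(a and b)) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def consistent(kept, assignment, output):
    """A shortened rule is consistent if every row matching it on the kept
    inputs produces the same output."""
    return all(out == output
               for row, out in table
               if all(row[i] == assignment[i] for i in kept))

# For each row, keep the smallest set of input conditions that still fixes the output.
rules = set()
for assignment, output in table:
    for k in range(len(inputs) + 1):
        found = next((kept for kept in combinations(range(len(inputs)), k)
                      if consistent(kept, assignment, output)), None)
        if found is not None:
            rule = tuple((inputs[i], assignment[i]) for i in found)
            rules.add((rule, output))
            break

for conds, out in sorted(rules, key=lambda r: (r[1], r[0])):
    print(" AND ".join(f"{v}={val}" for v, val in conds) or "TRUE", "->", out)
```

For this toy function the eight original rows collapse to three short rules (a=0 -> 0, b=0 -> 0, a=1 AND b=1 -> 1), and the redundant input c disappears entirely, which is the circuit-level effect the abstract describes.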

1 citation


Cited by
Journal ArticleDOI
01 Jan 2020

16 citations

Posted Content
TL;DR: This paper applies knowledge discovery principles, from data cleansing, integration, selection, and aggregation through data mining to knowledge extraction, and compares six machine-learning approaches for detecting phishing based on a small number of carefully chosen features.
Abstract: Phishing emails are the first step in many of today's attacks. They come with a simple hyperlink, a request for action, or a full replica of an existing service or website. The goal is generally to trick the user into voluntarily giving away sensitive information such as login credentials. Many approaches and applications have been proposed and developed to catch and filter phishing emails; however, the problem still lacks a complete and comprehensive solution. In this paper, we apply knowledge discovery principles from data cleansing, integration, selection, and aggregation through data mining to knowledge extraction. We study feature effectiveness based on Information Gain and contribute two new features to the literature. We compare six machine-learning approaches to detect phishing based on a small number of carefully chosen features. We calculate false positives, false negatives, mean absolute error, recall, precision and F-measure, and achieve very low false positive and negative rates. Naïve Bayes has the lowest true positive rate, and overall Neural Networks hold the most promise for accurate phishing detection, with an accuracy of 99.4%.
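A minimal sketch of the pipeline the abstract describes, assuming synthetic data and placeholder features (the paper's dataset, its two new features, and its exact models are not reproduced here): rank features by information gain, keep the top few, and compare a Naive Bayes classifier against a neural network:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support

rng = np.random.default_rng(0)
# Hypothetical email features (e.g. link count, sender mismatch, urgency words, ...).
X = rng.random((500, 10))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)    # toy "phishing" label

# Information gain per feature, estimated here via mutual information.
gain = mutual_info_classif(X, y, random_state=0)
top = np.argsort(gain)[-4:]                  # keep the 4 most informative features

X_tr, X_te, y_tr, y_te = train_test_split(X[:, top], y, test_size=0.3, random_state=0)
for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Neural Network", MLPClassifier(max_iter=1000, random_state=0))]:
    clf.fit(X_tr, y_tr)
    p, r, f, _ = precision_recall_fscore_support(y_te, clf.predict(X_te),
                                                 average="binary")
    print(f"{name}: precision={p:.3f} recall={r:.3f} F1={f:.3f}")
```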

5 citations

Patent
07 Nov 2016
TL;DR: In this patent, a technique is provided for an HTTP-based client to reassign one or more stored elements of web application client state information upon receipt of an HTTP redirect in response to a request-URI.
Abstract: A technique to reassign one or more stored elements of web application client state information is provided in an HTTP-based client upon receipt of an HTTP redirect in response to a request-URI. One or more stored elements associated to the request-URI are saved in or in association with the client. Upon receipt of an HTTP 301 (permanent) redirect, the client automatically reassigns (re-associates) the one or more stored elements to the redirect domain when the redirect can be verified as authentic (e.g., to originate from the application to which the client is attempting to connect).
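A rough sketch of the idea, assuming a simplified placeholder verification rule and an in-memory state store (neither is the patented mechanism): on a verified HTTP 301, re-associate client-side state from the old host to the redirect target instead of losing it or leaking it to an unrelated domain.

```python
from urllib.parse import urlparse
import requests

# Hypothetical per-host store of web application client state.
client_state = {"app.example.com": {"csrf_token": "abc123", "prefs": {"lang": "en"}}}

def is_authentic(old_host: str, new_host: str) -> bool:
    # Placeholder check: only accept redirects that stay within the same parent
    # domain. The patent instead describes verifying that the redirect originates
    # from the application the client is attempting to connect to.
    return new_host.endswith("." + old_host.split(".", 1)[1])

def get_with_state_reassignment(url: str) -> requests.Response:
    resp = requests.get(url, allow_redirects=False, timeout=10)
    if resp.status_code == 301 and "Location" in resp.headers:
        old_host = urlparse(url).hostname
        new_host = urlparse(resp.headers["Location"]).hostname
        if old_host in client_state and is_authentic(old_host, new_host):
            # Re-associate the stored elements with the redirect domain.
            client_state[new_host] = client_state.pop(old_host)
        return requests.get(resp.headers["Location"], timeout=10)
    return resp
```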

4 citations

Proceedings ArticleDOI
05 Oct 2020

4 citations

Journal ArticleDOI
TL;DR: The postulation of the cholesterol-binding motif (CRAC/CARC), its presence in different proteins, and the validation of its interaction with cholesterol have established the importance of the motif in cholesterol-mediated modulation of proteins and signaling pathways.
Abstract: Objectives: The current study focuses on the design of a computational model for human ABC transporters, wherein the TM sequences matching the CRAC/CARC motif are extracted. Methods: The postulation of the cholesterol-binding motif (CRAC/CARC), its presence in different proteins, and the validation of its interaction with cholesterol have established the importance of the motif in cholesterol-mediated modulation of proteins and signaling pathways. Several viral proteins and membrane proteins (especially alpha-helical transmembrane proteins such as GPCRs and transporters) are reported to be modulated by cholesterol. Experimental studies have so far been performed on only a few proteins in a family, but based on evolutionary conservation and consensus an exploration can be done confidently within a family. However, the representation of the motif has low consensus, yielding several false positives and thus reducing its reliability. Findings: A computational hybrid clustering method based on rough sets with the fuzzy c-means algorithm is used to mine cholesterol-binding sequences from the ABC family. Higher weightage is given to sequences based on the following parameters: motifs with a greater number of sub-motifs, the number of helices bearing the motif in a protein, and compliance with the orientation of cholesterol in the membrane for its interaction with the motif. Improvement: A detailed study of a given superfamily, with an approach to reduce redundancy and enrich the data, can improve predictability.
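As a small illustration of the motif-extraction step only (the paper's weighting scheme and rough-set/fuzzy c-means clustering are omitted), the sketch below scans hypothetical TM-segment sequences with the commonly cited CRAC consensus (L/V)-X(1-5)-Y-X(1-5)-(R/K) and its inverted CARC counterpart (K/R)-X(1-5)-(Y/F)-X(1-5)-(L/V); treat the exact pattern definitions and the example sequences as assumptions:

```python
import re

# Consensus patterns read N-terminus to C-terminus (assumed definitions).
CRAC = re.compile(r"[LV].{1,5}Y.{1,5}[RK]")
CARC = re.compile(r"[RK].{1,5}[YF].{1,5}[LV]")

tm_segments = {
    "ABCA1_TM3": "ALVPWAAYLRQIVTG",   # hypothetical sequence
    "ABCG1_TM1": "MKTAFSILGGAILPVV",  # hypothetical sequence
}

for name, seq in tm_segments.items():
    for label, pattern in (("CRAC", CRAC), ("CARC", CARC)):
        for m in pattern.finditer(seq):
            print(f"{name}: {label} match '{m.group(0)}' at position {m.start() + 1}")
```

Because the consensus is so permissive, a scan like this produces many false positives, which is exactly why the paper adds weighting by sub-motif count, helix count, and membrane orientation before clustering.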

3 citations