Author

B. K. Tripathy

Other affiliations: Berhampur University
Bio: B. K. Tripathy is an academic researcher from VIT University. The author has contributed to research in topics: Rough set & Cluster analysis. The author has an h-index of 22 and has co-authored 243 publications receiving 1735 citations. Previous affiliations of B. K. Tripathy include Berhampur University.


Papers
Journal ArticleDOI
TL;DR: The concept of bag complement is suitably redefined, and many existing and new theorems involving bag operations are established based upon this new definition.
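The entry above does not reproduce the redefined complement itself. As a rough illustration of the underlying bag (multiset) operations only, the minimal Python sketch below uses a complement taken relative to a fixed universe bag; that convention, and the toy data, are assumptions, not necessarily the paper's definition.

```python
from collections import Counter

# A bag (multiset) maps each element to its multiplicity.
A = Counter({'a': 2, 'b': 1})
B = Counter({'a': 1, 'b': 3})

# Standard bag operations: union keeps the max count, intersection the min.
print(A | B)   # Counter({'b': 3, 'a': 2})
print(A & B)   # Counter({'a': 1, 'b': 1})

# Complement relative to a fixed universe bag U (assumed convention:
# the complement of A holds whatever counts are missing from A up to U).
U = Counter({'a': 3, 'b': 3, 'c': 2})
print(U - A)   # Counter({'b': 2, 'c': 2, 'a': 1})
```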

93 citations

Book ChapterDOI
01 Jan 2016
TL;DR: This paper follows the approach of Tripathy et al. in redefining IFSS and presents an application of IFSS in decision-making which substantially improves on, and is more realistic than, the algorithms proposed earlier by several authors.
Abstract: Molodtsov introduced soft set theory as a new mathematical approach to handle uncertainty. Hybrid models have been found to be more useful than their individual components. Following this trend, fuzzy soft sets (FSS) and intuitionistic fuzzy soft sets (IFSS) were introduced. Recently, soft sets were redefined by Tripathy and Arun (Int J Reasoning-Based Intell Syst 7(3/4):244–253, 2015) [6] using the notion of characteristic function. This led to redefinitions of concepts like the complement, intersection, and union of IFSS, as well as the null and absolute IFSS. In this paper, we follow the approach of Tripathy et al. in redefining IFSS and present an application of IFSS in decision-making which substantially improves on, and is more realistic than, the algorithms proposed earlier by several authors.
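The decision-making algorithm itself is not reproduced in this entry. As a hedged illustration only, the Python sketch below shows a common score-based IFSS decision procedure, scoring each object by the sum of (membership − non-membership) across parameters; the data, the parameter names, and the score function are assumptions, not the paper's algorithm.

```python
# Score-based decision-making over an intuitionistic fuzzy soft set (IFSS).
# IFSS: parameter -> object -> (membership mu, non-membership nu), mu + nu <= 1.
ifss = {
    'cost':    {'x1': (0.7, 0.2), 'x2': (0.5, 0.4), 'x3': (0.6, 0.3)},
    'quality': {'x1': (0.4, 0.5), 'x2': (0.8, 0.1), 'x3': (0.6, 0.2)},
}
objects = ['x1', 'x2', 'x3']

# Score each object by summing (mu - nu) over all parameters,
# then choose the object with the highest total score.
scores = {x: sum(mu - nu for mu, nu in (ifss[p][x] for p in ifss))
          for x in objects}
best = max(scores, key=scores.get)
print(scores, '->', best)   # x2 wins with score 0.8
```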

61 citations

Proceedings ArticleDOI
09 Aug 2010
TL;DR: This paper proposes a modification to Bin Zhou and Jian Pei's algorithm for network anonymization which can handle situations in which an adversary has knowledge about vertices in the second or higher hops of a vertex, in addition to its immediate neighbors.
Abstract: Nowadays, more and more social network data are being published in one way or another, so preserving privacy when publishing social network data has become an important concern. With some local knowledge about individuals in a social network, an adversary may easily attack the privacy of some victims. Most of the work done so far towards privacy preservation can deal with relational data only. Bin Zhou and Jian Pei [11] proposed a scheme for the anonymization of social networks, which is an initiative in this direction and provides a partial solution to the problem. However, their algorithm cannot handle situations in which an adversary has knowledge about vertices in the second or higher hops of a vertex, in addition to its immediate neighbors. In this paper, we propose a modification to their algorithm for network anonymization which can handle such situations. In doing so, we use an algorithm for graph isomorphism based on the adjacency matrix instead of their DFS-based approach [11]. More importantly, the time complexity of our algorithm is lower than that of Zhou and Pei's.
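The paper's exact anonymization procedure is not reproduced here, but the adjacency-matrix isomorphism test it mentions can be sketched generically: two graphs are isomorphic if some relabelling of vertices makes their adjacency matrices coincide. The brute-force Python sketch below checks exactly that; it is an illustration, practical only for the small neighborhood subgraphs compared during anonymization, not the paper's optimized algorithm.

```python
from itertools import permutations

def isomorphic(A, B):
    """A, B: square 0/1 adjacency matrices (lists of lists)."""
    n = len(A)
    if n != len(B):
        return False
    # Try every vertex relabelling p; O(n! * n^2), fine for tiny graphs.
    for p in permutations(range(n)):
        if all(A[i][j] == B[p[i]][p[j]] for i in range(n) for j in range(n)):
            return True
    return False

A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # path 0-1-2
B = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]   # same path, relabelled
print(isomorphic(A, B))                 # True
```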

55 citations

Posted Content
TL;DR: In this paper, two processes, a pre-process and a post-process, are used to mine suitable rules and to explore the relationships among the attributes, in order to extract better knowledge and the most important factors affecting decision-making.
Abstract: Medical diagnosis processes vary in the degree to which they attempt to deal with different complicating aspects of diagnosis, such as the relative importance of symptoms, varied symptom patterns, and the relations between the diseases themselves. Based on decision theory, many mathematical models such as crisp sets, probability distributions, fuzzy sets, and intuitionistic fuzzy sets were developed in the past to deal with these complicating aspects of diagnosis, but many such models fail to include important aspects of the expert's decisions. Pawlak therefore made an effort to handle inconsistencies in data with the introduction of rough set theory. Though rough sets have major advantages over the other methods, they generate too many rules, which creates difficulties in decision-making, so it is essential to minimize the decision rules. In this paper, we use two processes, a pre-process and a post-process, to mine suitable rules and to explore the relationships among the attributes. In the pre-process we use rough set theory to mine suitable rules, whereas in the post-process we apply formal concept analysis to these rules to extract better knowledge and the most important factors affecting decision-making.
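For readers unfamiliar with the rough-set machinery the pre-process step relies on, the Python sketch below computes Pawlak's lower and upper approximations of a target concept from the equivalence classes of an indiscernibility relation; the toy universe and classes are illustrative assumptions, not data from the paper.

```python
# Equivalence classes of the indiscernibility relation over the universe
# (patients indistinguishable on the chosen symptom attributes).
classes = [{'p1', 'p2'}, {'p3'}, {'p4', 'p5'}]

# Target concept, e.g. "patients with disease D".
X = {'p1', 'p2', 'p4'}

lower = set().union(*(c for c in classes if c <= X))   # certainly in X
upper = set().union(*(c for c in classes if c & X))    # possibly in X
boundary = upper - lower                               # inconsistent region

# Certain rules come from the lower approximation; the boundary region
# is where the data are inconsistent and only possible rules exist.
print(lower, upper, boundary)   # {p1,p2}  {p1,p2,p4,p5}  {p4,p5}
```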

52 citations

Proceedings ArticleDOI
01 Apr 2017
TL;DR: Three popular deep learning architectures, namely vanilla RNNs, Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU), are analyzed for sentiment analysis of sentences using pre-trained word vectors from the Google News dataset.
Abstract: Recurrent Neural Networks (RNNs) are popular deep learning architectures used in Natural Language Processing for analyzing sentiments in sentences. The recurrent nature of these networks enables them to use information from previous time steps. In this paper, we analyze the performance of three RNNs, namely vanilla RNNs, Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU). Both unidirectional and bidirectional variants of these networks are considered. Pre-trained word vectors from the Google News dataset are used. We evaluate the performance of these networks on the Amazon health product reviews dataset and on the sentiment analysis benchmark datasets SST-1 and SST-2.
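As a hedged illustration of the kind of model compared in the paper, the PyTorch sketch below builds a (bi)directional LSTM sentiment classifier over frozen pretrained embeddings; swapping nn.LSTM for nn.RNN or nn.GRU yields the other two variants. The hyperparameters and the randomly initialized "pretrained" matrix are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim, num_classes = 10_000, 300, 128, 2
pretrained = torch.randn(vocab_size, embed_dim)   # stand-in for word2vec vectors

class SentimentLSTM(nn.Module):
    def __init__(self, bidirectional=True):
        super().__init__()
        # Frozen pretrained word vectors, as with the Google News embeddings.
        self.embed = nn.Embedding.from_pretrained(pretrained, freeze=True)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=bidirectional)
        out_dim = hidden_dim * (2 if bidirectional else 1)
        self.fc = nn.Linear(out_dim, num_classes)

    def forward(self, token_ids):                 # (batch, seq_len)
        x = self.embed(token_ids)
        output, _ = self.lstm(x)
        # Classify from the last time step (a simple pooling choice).
        return self.fc(output[:, -1, :])

model = SentimentLSTM()
logits = model(torch.randint(0, vocab_size, (4, 20)))   # batch of 4 sentences
print(logits.shape)                                     # torch.Size([4, 2])
```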

49 citations


Cited by
01 Jan 1979
TL;DR: This special issue aims at gathering recent advances in learning-with-shared-information methods and their applications in computer vision and multimedia analysis; papers addressing interesting real-world computer vision and multimedia applications are especially encouraged.
Abstract: In the real world, a realistic setting for computer vision or multimedia recognition problems is that some classes contain a lot of training data while many classes contain only a small amount. How to use frequent classes to help learn rare classes, for which it is harder to collect training data, is therefore an open question. Learning with shared information is an emerging topic in machine learning, computer vision, and multimedia analysis. Different levels of components can be shared during the concept modeling and machine learning stages, such as generic object parts, attributes, transformations, regularization parameters, and training examples. Regarding specific methods, multi-task learning, transfer learning, and deep learning can be seen as different strategies for sharing information. These learning-with-shared-information methods are very effective in solving real-world large-scale problems. This special issue aims at gathering the recent advances in learning-with-shared-information methods and their applications in computer vision and multimedia analysis. Both state-of-the-art works and literature reviews are welcome for submission. Papers addressing interesting real-world computer vision and multimedia applications are especially encouraged. Topics of interest include, but are not limited to:
• Multi-task learning or transfer learning for large-scale computer vision and multimedia analysis
• Deep learning for large-scale computer vision and multimedia analysis
• Multi-modal approaches for large-scale computer vision and multimedia analysis
• Different sharing strategies, e.g., sharing generic object parts, attributes, transformations, regularization parameters, and training examples
• Real-world computer vision and multimedia applications based on learning with shared information, e.g., event detection, object recognition, object detection, action recognition, human head pose estimation, object tracking, location-based services, and semantic indexing
• New datasets and metrics to evaluate the benefit of the proposed sharing ability for a specific computer vision or multimedia problem
• Survey papers on the topic of learning with shared information
Authors who are unsure whether their planned submission is in scope may contact the guest editors prior to the submission deadline with an abstract, in order to receive feedback.

1,758 citations

Journal Article
TL;DR: The Health Insurance Portability and Accountability Act, also known as HIPAA, was designed to protect health insurance coverage for workers and their families while between jobs, and it establishes standards for electronic health care transactions.
Abstract: The Health Insurance Portability and Accountability Act, also known as HIPAA, was first delivered to Congress in 1996 and consisted of just two Titles. It was designed to protect health insurance coverage for workers and their families while between jobs. It establishes standards for electronic health care transactions and addresses the issues of privacy and security when dealing with Protected Health Information (PHI). HIPAA is applicable only in the United States of America.

561 citations

Journal ArticleDOI
TL;DR: The abstract should not contain any undefined abbreviations or unspecified references, and work planned but not completed should not appear in the abstract.
Abstract: Please provide a short abstract of 100 to 250 words. The abstract should not contain any undefined abbreviations or unspecified references. Work planned but not completed should not appear in the abstract.

520 citations

Journal ArticleDOI
TL;DR: This paper investigates representative Big Data applications from typical services like finance & economics, healthcare, Supply Chain Management (SCM), and the manufacturing sector, and discusses current developments in Big Data for SCM in the service and manufacturing industries worldwide, including North America, Europe, and the Asia-Pacific region.

412 citations