Papers
TL;DR: This approach converts extracted iris features into a barcode, reducing the storage space and the time required for searching and matching operations, which are essential in real-time applications.
Abstract: Iris recognition is an important authentication mechanism; authentication requires verifying individuals for uniqueness, so converting iris data into a barcode is an appropriate way to authenticate individuals. Such a converted barcode is unique for every iris image. In iris recognition, most applications capture the eye image, extract the iris features, and store them in a database in digitized form. The size of the digitized form is equal to or slightly smaller than the original iris image. This leads to drawbacks such as higher memory usage and more time required for searching and matching operations. To overcome these drawbacks, we propose an approach wherein we convert extracted iris features into barcodes. This transformation of the iris into a barcode reduces the storage space and the time required for searching and matching operations, which are essential features in real-time applications.
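The space savings come from replacing a full digitized template with a compact bit string. As a rough illustration (a generic iris-code sketch, not the paper's exact encoding; `features_to_code` and the zero threshold are assumptions), each real-valued feature can be binarized and packed into bytes, after which matching reduces to a fast Hamming-distance comparison:

```python
# Sketch: quantize an iris feature vector into a compact binary code,
# then match codes by Hamming distance instead of comparing raw images.

def features_to_code(features, threshold=0.0):
    """Binarize each feature against a threshold and pack the bits
    into bytes, giving a compact barcode-like representation."""
    bits = [1 if f > threshold else 0 for f in features]
    code = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        code.append(byte)
    return bytes(code)

def hamming_distance(code_a, code_b):
    """Count differing bits; a small distance suggests a match."""
    return sum(bin(a ^ b).count("1") for a, b in zip(code_a, code_b))

enrolled = features_to_code([0.8, -0.1, 0.3, 0.9, -0.5, 0.2, 0.7, -0.3])
probe    = features_to_code([0.7, -0.2, 0.4, 0.8, -0.6, 0.1, 0.6, -0.4])
print(hamming_distance(enrolled, probe))  # 0: every feature has the same sign
```

An 8-feature template shrinks to a single byte here, which is why search and matching over many enrolled users become cheap.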
01 Jan 2021
TL;DR: In this paper, an approach for generating a short and precise summary from a single document using a weighted average of feature scores is proposed: sentences are ranked by their scores, and the top 40% are selected to form the summary.
Abstract: In the era of information overload, the need for applications that comb through huge numbers of documents to extract important information is increasing. This information helps in assessing whether or not a document is relevant. Automatic text summarization is one solution to the problem of extracting useful information from huge collections of textual data. A summarizer converts a lengthy document into a short summary by extracting important sentences from it without losing the crucial information. A summarizer can be either abstractive or extractive. An extractive summarizer relies on the statistical features of the input text to create a summary by merely copying the important sentences, whereas an abstractive summarizer tries to understand the context of the document and generates a summary which may contain new sentences not present in the original document. This paper focuses on the extractive summarization technique. An approach for generating a short and precise summary from a single document using a weighted average of feature scores is proposed. Sentences are ranked based on their scores, and the top 40% of sentences are selected to form the summary. Experiments were carried out on 250 documents from the BBC News Summary dataset. The results were compared with existing online summarizers, and the proposed summarizer gave better average recall, precision, and F-measure values.
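The scoring-and-selection step can be sketched as follows; the two features used here (normalized word frequency and sentence position) and their weights are illustrative assumptions, not the paper's exact feature set:

```python
# Sketch of extractive summarization by weighted feature scores:
# score every sentence, rank, keep the top 40%, restore original order.
import re
from collections import Counter

def summarize(text, ratio=0.4, weights=(0.7, 0.3)):
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))
    top = max(freq.values())
    scores = []
    for i, s in enumerate(sentences):
        toks = re.findall(r"\w+", s.lower())
        # Feature 1: average normalized word frequency in the sentence.
        f_freq = sum(freq[t] for t in toks) / (len(toks) * top) if toks else 0.0
        # Feature 2: position score (earlier sentences rank higher).
        f_pos = (len(sentences) - i) / len(sentences)
        scores.append(weights[0] * f_freq + weights[1] * f_pos)
    k = max(1, round(ratio * len(sentences)))
    # Pick the k best-scoring sentences, then emit them in document order.
    keep = sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:k])
    return " ".join(sentences[i] for i in keep)

print(summarize("Cats are great. Cats sleep a lot. Dogs bark. Birds fly. Fish swim."))
```

Emitting the selected sentences in their original order keeps the extractive summary readable, since copied sentences were never written to stand alone.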
TL;DR: A novel approach to medium access using the CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) protocol for SCADA systems employed in power system operation and control; a Moore finite state machine is designed and a VHDL model is developed.
Abstract: This paper discusses a novel approach to access the medium using the CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) protocol for SCADA (Supervisory Control And Data Acquisition) systems employed in power system operation and control. This method offers superior performance over the existing practice of accessing the medium using CSMA/CD protocols. The proposal makes use of the binary countdown method with modifications. The SCADA requirements of reliability and real-time operation can be achieved through fast data transmission and prompt delivery. In this work, a program is written and run to allocate the channel considering the priorities of the RTUs (Remote Terminal Units) and the type of data to be transmitted, such as normal data for archival and event-triggered data for operation control. The program uses generic statements and is applicable to a system with one MTU (Master Terminal Unit) and any number of RTUs. The results obtained by running the procedure show its novelty. Further, a Moore finite state machine is designed and a VHDL model is developed.
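The binary countdown method the proposal builds on can be illustrated in a few lines (this is the textbook scheme, not the paper's modified version; the function name and bit width are assumptions): each contending station broadcasts its priority number bit by bit, MSB first, onto a wired-OR bus, and a station that transmits 0 while the bus reads 1 drops out, so the highest-priority RTU wins without a collision.

```python
# Sketch of classic binary countdown arbitration on a wired-OR medium.

def binary_countdown(priorities, bits=4):
    """Return the priority number that wins the arbitration round."""
    contenders = set(priorities)
    for shift in range(bits - 1, -1, -1):
        sent = {p: (p >> shift) & 1 for p in contenders}
        bus = max(sent.values())  # wired-OR: bus reads 1 if anyone sent 1
        # Stations whose bit disagrees with the bus withdraw for this round.
        contenders = {p for p in contenders if sent[p] == bus}
    (winner,) = contenders
    return winner

print(binary_countdown([2, 5, 9, 12]))  # 12: highest priority wins
```

The scheme is deterministic and collision-free, which is why it suits the reliability and real-time constraints the abstract mentions better than random-backoff CSMA/CD.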
TL;DR: This work implements a personalized search engine for information retrieval using a client-server module that performs intelligent search for user-preferred information and stores the results in a database for later access.
Abstract: Searching for relevant information is often difficult; sometimes we do not find the exact information we are actually seeking, which wastes time and leads to unknowingly revisiting the same web pages. A system that knows our needs, requirements, preferences, and patterns can retrieve the correct information and speed up processing. In this work, a personalized search engine for information retrieval is implemented using a client-server module that performs intelligent search for user-preferred information and stores the search results in a database for further access. For information retrieval, the Scrapy framework is used to retrieve all the information the user needs by specifying the URL of that data. The fetched information is stored in the database, which enables offline browsing, full-text search in the database, fast response, and no repetition of web pages.
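The storage side of this design can be sketched with SQLite (the schema and the substring search are assumptions; in the described system the bodies would come from pages fetched by the crawler):

```python
# Sketch: cache fetched page text in SQLite so later queries can be
# answered offline, without re-visiting the same pages.
import sqlite3

conn = sqlite3.connect(":memory:")  # a file path here would persist the cache
conn.execute("CREATE TABLE pages (url TEXT PRIMARY KEY, body TEXT)")

def store(url, body):
    # Upsert so re-fetching a page never creates duplicate rows.
    conn.execute("INSERT OR REPLACE INTO pages VALUES (?, ?)", (url, body))

def search(term):
    # Substring search over the cached bodies; returns matching URLs.
    cur = conn.execute("SELECT url FROM pages WHERE body LIKE ?", (f"%{term}%",))
    return [row[0] for row in cur]

store("https://example.com/a", "iris recognition and barcodes")
store("https://example.com/b", "text summarization techniques")
print(search("barcode"))  # ['https://example.com/a']
```

The `PRIMARY KEY` on `url` plus `INSERT OR REPLACE` is what prevents the repeated-page problem the abstract describes.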
01 Jan 2021
TL;DR: In this paper, a support vector machine approach, implemented with the help of the OpenCV simulation tool, is used for corner detection; the algorithm is straightforward, has low computational complexity, and its machine learning capability gives good results.
Abstract: A support vector machine approach in machine vision, with the help of the OpenCV simulation tool, is used for corner detection. The direction of maximum gray-level change is calculated for every edge pixel in the picture, and each edge pixel is represented by a four-dimensional feature vector: the counts of edge pixels in a window centered on the pixel, grouped by the four quantized directions of their maximum gray-level change. These feature vectors are used to design a support vector machine, which classifies critical points for corner detection. The algorithm is straightforward, with low computational complexity, and its machine learning capability gives good results.
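The four-dimensional feature described above can be sketched in pure Python (the window size and the four-way direction binning are assumptions); an SVM would then classify each resulting vector as corner or non-corner:

```python
# Sketch: build a 4-D feature for every edge pixel by counting the edge
# pixels in a surrounding window whose maximum gray-level change falls
# in each of four quantized directions (0, 45, 90, 135 degrees).

def edge_direction_features(edges, directions, win=1):
    """edges: 2-D 0/1 grid; directions: same-shape grid of bins 0..3.
    Returns {(row, col): [n0, n1, n2, n3]} for every edge pixel."""
    h, w = len(edges), len(edges[0])
    feats = {}
    for r in range(h):
        for c in range(w):
            if not edges[r][c]:
                continue
            vec = [0, 0, 0, 0]
            for dr in range(-win, win + 1):
                for dc in range(-win, win + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w and edges[rr][cc]:
                        vec[directions[rr][cc]] += 1
            feats[(r, c)] = vec
    return feats
```

A corner is intuitively a point where the window contains strong edge responses in more than one direction, which is exactly the pattern a linear classifier can separate in this 4-D space.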
Authors
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Narasimha H. Ayachit | 15 | 104 | 703 |
| Arjumand A. Kittur | 14 | 17 | 807 |
| S. C. Shiralashetti | 13 | 45 | 493 |
| Varsha S. Joshi | 11 | 17 | 405 |
| A.A. Kittur | 11 | 12 | 673 |
| V.S. Yaliwal | 10 | 35 | 368 |
| Umakant P. Kulkarni | 10 | 65 | 372 |
| S. R. Biradar | 10 | 38 | 330 |
| Suresh Chavhan | 9 | 26 | 169 |
| Mrityunjaya V. Latte | 9 | 38 | 214 |
| P. S. Shivakumar Gouda | 8 | 29 | 206 |
| M.N. Kalasad | 8 | 9 | 212 |
| Satish S. Bhairannawar | 6 | 19 | 80 |
| G S Thyagaraju | 6 | 12 | 80 |
| V. S. Hegde | 6 | 11 | 107 |