
Showing papers by "SDM College of Engineering and Technology" published in 2013


Journal ArticleDOI
TL;DR: In this paper, chitosan-wrapped multiwalled carbon nanotube (CS-wrapped MWCNT)-incorporated sodium alginate (SA) membranes were prepared using a solution technique and subjected to pervaporation dehydration of isopropanol.

85 citations


Journal ArticleDOI
TL;DR: In this paper, tetraethylorthosilicate-incorporated hybrid poly(vinyl alcohol) membranes were grafted with glycidyltrimethylammonium chloride (GTMAC) at different mass percentages.

39 citations


Proceedings ArticleDOI
24 Sep 2013
TL;DR: EyeK, a gaze-based text entry system that reduces dwell time and helps mitigate visual search time, is proposed; it can easily be accommodated in medium-sized display devices such as Tablet PCs and PDAs.
Abstract: Over the last three decades, eye gaze has become an important modality of text entry on large and small display digital devices, serving people with disabilities as well as the able-bodied. Despite the many tools developed, issues like minimizing dwell time, visual search time and interface area, and stabilizing eye-controlled mouse movement remain points of concern in making any gaze typing interface more user-friendly, accurate and robust. In this paper, we propose EyeK, a gaze-based text entry system that reduces dwell time and helps mitigate visual search time. Performance evaluation shows that the proposed interface achieves, on average, a 15% higher text entry rate than existing interfaces. By design, the proposed interface can easily be accommodated in medium-sized display devices like Tablet PCs and PDAs. Also, the developed system can be used by people with motor disabilities.

39 citations


Journal ArticleDOI
TL;DR: Chitosan-based hybrid membranes were prepared by incorporating 2-(3,4-epoxycyclohexyl) ethyltrimethoxysilane (ETMS) into a chitosan matrix using a sol-gel technique, as mentioned in this paper.

30 citations


Journal ArticleDOI
TL;DR: In this article, a top-down approach was adopted to prepare silver nanoparticles by electrochemical dissolution of the metal in suitable organic solvents, which permits in situ capping of the nanoparticles with organic molecules.

27 citations


Journal ArticleDOI
TL;DR: In this paper, polymeric membranes composed of sodium alginate and poly(styrene sulfonic acid-co-maleic acid) were used to separate water–dioxane mixtures in the temperature range of 30–50 °C.

23 citations


Book ChapterDOI
18 Jan 2013
TL;DR: Experimental results show that the algorithm achieves 80% drowsiness detection performance under varying lighting conditions; processing is done on only one eye to analyze eyelid movement, increasing speed and reducing false detections.
Abstract: Use of technology in building human comforts and automation is growing fast, particularly in the automobile industry. Safety of human beings is the major concern in vehicle automation. Statistics show that 20% of all traffic accidents are due to a diminished vigilance level of the driver, hence the use of technology to detect drowsiness and alert the driver is of prime importance. In this paper, a method for detecting drowsiness based on multiple facial features, namely eyelid movements and yawning, is proposed. The geometrical features of the mouth and the eyelid movement are processed in parallel to detect drowsiness. Haar classifiers are used to detect the eye and mouth regions. Only the position of the lower lip is checked for yawning, since during a yawn only the lower lip moves, due to the downward movement of the lower jaw, while the position of the upper lip is fixed. Processing is done on only one eye to analyze the attributes of eyelid movement in drowsiness, increasing speed and reducing false detections. Experimental results show that the algorithm achieves 80% drowsiness detection performance under varying lighting conditions.
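The eyelid-movement criterion can be sketched as a simple per-frame analysis. The thresholds, frame rate and single-blink accounting below are illustrative assumptions, not the paper's calibrated values:

```python
def detect_drowsiness(eye_closed, fps=30, closure_threshold_s=1.0):
    """Flag drowsiness from a per-frame eye-closed sequence.

    eye_closed: list of booleans, one per video frame.
    A closure lasting longer than closure_threshold_s seconds is treated
    as a drowsiness event.  Returns (drowsy, blink_count, longest_closure_s).
    """
    blink_count = 0
    run = longest = 0
    prev = False
    for closed in eye_closed:
        if closed:
            run += 1
            longest = max(longest, run)
        else:
            if prev:                 # a closure just ended -> one blink
                blink_count += 1
            run = 0
        prev = closed
    if prev:                         # sequence ended mid-closure
        blink_count += 1
    longest_s = longest / fps
    return longest_s > closure_threshold_s, blink_count, longest_s

# 10 alert frames, a 40-frame closure (~1.3 s at 30 fps), then alert again
frames = [False] * 10 + [True] * 40 + [False] * 10
print(detect_drowsiness(frames))
```

A real system would feed this from the Haar-detected eye region frame by frame; the yawning check on the lower-lip position would run in parallel, as the abstract describes.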

20 citations


Journal ArticleDOI
TL;DR: In this paper, the authors have proposed an improved model of a PV module that makes use only parameters provided by manufacturer's datasheets without requiring the use of any numerical methods.
Abstract: The cost and performance of a PV plant depend on the module under consideration. However, the electrical parameters of a module drift from those provided by the manufacturer as the module ages. Therefore, the behavior of a mathematical model of a PV module may not match real operating conditions. Earlier papers have proposed improved models of a PV module that use only parameters provided by the manufacturer's datasheet, without requiring any numerical methods. This paper interprets the module using one of the available models, represented as a Norton's equivalent circuit, which is a simple and approximate model. The Norton circuit model of the module helps in understanding the behavior of the solar module. The performance of this model is compared with that of an existing model using Matlab, and the results match to a great extent. The results obtained for both models are validated by conducting a series of experiments on a physical module. Thus, the work presents yet another model of a PV module for evaluating its performance parameters.
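A minimal sketch of the Norton-equivalent idea, assuming the simplest linear mapping from datasheet values (current source I_N = Isc in parallel with R_N = Voc/Isc); the paper's actual parameter extraction may differ:

```python
def norton_iv(v, i_sc, v_oc):
    """Linear Norton approximation of a PV module's I-V curve.

    The module is modelled as a current source i_sc in parallel with a
    resistance r_n = v_oc / i_sc, so that I = i_sc at V = 0 and I = 0
    at V = v_oc.  Both parameters come straight from the datasheet.
    """
    r_n = v_oc / i_sc
    return i_sc - v / r_n

# Hypothetical 8.21 A / 32.9 V module (datasheet-style values)
print(norton_iv(0.0, 8.21, 32.9))    # short-circuit current, 8.21 A
print(norton_iv(32.9, 8.21, 32.9))   # open circuit, 0 A
```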

15 citations


Proceedings ArticleDOI
04 Jul 2013
TL;DR: This paper proposes an optimized parallel architecture of the AES algorithm for disk encryption, suitable for implementation in a multicore environment, which exhibits improved performance over the sequential approach.
Abstract: Computers have become more prevalent, and their interconnection via networks has increased the dependence of both organizations and individuals on the information stored in and communicated by these systems. The end-user needs a faster, more capable system to keep up with this trend; at the same time, the security of electronically stored data is equally important. Disk encryption is a special case of data-at-rest protection in which the storage medium is a sector-addressable device. The Advanced Encryption Standard (AES) is a symmetric key block cipher that provides strong security because of its long key length and its complex mathematical calculations, permutations and substitutions. Because of this complexity, the execution time of the encryption process is large. However, with the advent of parallel computing and multicore processors, there is scope for parallelizing the AES algorithm at both the data and control levels. This paper proposes an optimized parallel architecture of the AES algorithm for disk encryption, suitable for implementation in a multicore environment. The Cipher Block Chaining (CBC) mode of encryption is used for implementing the disk encryption. As CBC does not support a parallel architecture, the Interleaved Cipher Block Chaining (ICBC) mode, proposed by the cryptographic community to allow parallel implementation, has been implemented. The AES algorithm in CBC and ICBC modes has been implemented in C and parallelized using the OpenMP API 3.1 standard. The performance analysis is done using Intel VTune™ Amplifier XE 2013. The parallel design (ICBC) exhibits improved performance over the sequential approach (CBC), and a speedup of approximately 1.7 is achieved.
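The interleaving idea behind ICBC can be sketched as follows: block i is chained onto chain i mod n, so the n chains can be processed by n cores independently. The block transform here is a toy keyed XOR standing in for AES, and the chain count, key and IVs are illustrative, not the paper's implementation:

```python
BLOCK = 16

def toy_cipher(block, key):
    # Stand-in for the AES block transform: a keyed XOR.  Real disk
    # encryption would use AES here; this only shows the chaining.
    return bytes(b ^ k for b, k in zip(block, key))

def icbc_encrypt(data, key, ivs):
    """Interleaved CBC with n = len(ivs) independent chains."""
    n = len(ivs)
    prev = list(ivs)                       # last ciphertext per chain
    out = []
    for i in range(0, len(data), BLOCK):
        chain = (i // BLOCK) % n
        x = bytes(a ^ b for a, b in zip(data[i:i + BLOCK], prev[chain]))
        c = toy_cipher(x, key)
        prev[chain] = c
        out.append(c)
    return b"".join(out)

def icbc_decrypt(data, key, ivs):
    n = len(ivs)
    prev = list(ivs)
    out = []
    for i in range(0, len(data), BLOCK):
        chain = (i // BLOCK) % n
        c = data[i:i + BLOCK]
        x = toy_cipher(c, key)             # XOR cipher is its own inverse
        out.append(bytes(a ^ b for a, b in zip(x, prev[chain])))
        prev[chain] = c
    return b"".join(out)

key = bytes(range(BLOCK))
ivs = [bytes([7]) * BLOCK, bytes([13]) * BLOCK]   # two chains = two cores
plain = b"sixteen-byte-blk" * 4
cipher = icbc_encrypt(plain, key, ivs)
assert icbc_decrypt(cipher, key, ivs) == plain
```

With n = 1 this degenerates to ordinary CBC; the parallel speedup comes purely from the independence of the chains.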

15 citations


Proceedings ArticleDOI
21 Oct 2013
TL;DR: This paper illustrates the use of CBIR techniques for automatic classification of archaeological monuments using the visual features shape and texture, to study the art form and retrieve similar images from a reference collection.
Abstract: Until now, Content Based Image Retrieval (CBIR) techniques have barely contributed to the archaeological domain, yet they can support archaeologists in the assessment and classification of archaeological finds. Museums and art galleries deal in inherently visual objects, and the ability to identify objects sharing some aspect of visual similarity can be useful both to researchers trying to trace historical influences and to art lovers looking for further examples of paintings or sculptures appealing to their taste. This paper illustrates the use of CBIR techniques for automatic classification of archaeological monuments using the visual features shape and texture, to study the art form and retrieve similar images from a reference collection. Shape-based features are extracted using morphological operators, and texture features are extracted using the gray level co-occurrence matrix (GLCM). A robust feature set is built to retrieve similar images. Experiments have been conducted on a database consisting of 500 images across 5 categories. Results of the proposed method are compared with the Canny and Sobel methods and demonstrate its efficiency.
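The GLCM texture step can be sketched from scratch for a single offset; the paper likely uses several offsets and statistics, so the horizontal (0, 1) offset and the Haralick contrast feature below are one illustrative choice:

```python
def glcm(img, levels):
    """Gray level co-occurrence matrix for the horizontal (0, 1) offset:
    count pairs img[r][c] -> img[r][c+1], normalised to probabilities."""
    m = [[0.0] * levels for _ in range(levels)]
    total = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
            total += 1
    return [[v / total for v in row] for row in m]

def contrast(p):
    """Haralick contrast: sum of P(i, j) * (i - j)^2."""
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

# Tiny 4-level "image"; smooth regions give low contrast
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
p = glcm(img, 4)
print(round(contrast(p), 4))
```

Each such statistic (contrast, energy, homogeneity, ...) becomes one element of the texture feature vector that is then compared across images.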

14 citations


Proceedings ArticleDOI
15 Oct 2013
TL;DR: In this paper, a study of crumb rubber tyre used to strengthen the subgrade was carried out on black cotton soil; the CBR test showed values ranging from 1.16 to 1.54.
Abstract: The management of scrap tyres has become a growing problem in recent years. Scrap tyres represent one of several special wastes that are difficult for municipalities to handle: whole tyres are difficult to landfill because they tend to float to the surface, and stockpiles of scrap tyres located in many communities result in public health, environmental and aesthetic problems. This paper presents a study of crumb rubber tyre used to strengthen the subgrade. The Standard Proctor test carried out on black cotton soil showed a moisture content of 18% and a dry density of 2.43. The CBR test showed values ranging from 1.16 to 1.54. It is therefore suggested that waste crumb rubber tyres can be safely used in the subgrade as a soil binder that will effectively hold the soil with increased strength values.

Proceedings ArticleDOI
01 Dec 2013
TL;DR: This paper provides a framework for decision fusion to select a robust horizon estimate out of `n' estimates based on a confidence factor, and proposes combining evidence parameters to generate the confidence factor using DSCR to justify the correctness of the estimated horizon.
Abstract: In this paper, we address the problem of decision fusion for robust horizon estimation using the Dempster-Shafer Combination Rule (DSCR). We provide a framework for decision fusion to select a robust horizon estimate out of `n' estimates based on a confidence factor. Vision-based attitude estimation depends on robust horizon estimation, and no single algorithm gives accurate results across different kinds of scenarios. We propose combining evidence parameters to generate a confidence factor using DSCR to justify the correctness of the estimated horizon. We compute a Confidence Interval (CI) based on a Gaussian Mixture Model (GMM), and propose two techniques to provide evidence parameters for the estimated horizon using the CI. We demonstrate the effectiveness of the decision framework on clear and noisy data sets of simulated and real images/videos captured by a Micro Air Vehicle (MAV).
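Dempster's rule itself can be sketched directly; the two mass functions below are invented toy evidence (e.g. "horizon estimate is good/bad" from two sources), not the paper's parameters:

```python
def combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets over the frame of discernment.  Products of
    masses with empty intersection form the conflict K, and surviving
    masses are renormalised by 1 - K."""
    raw = {}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                raw[inter] = raw.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in raw.items()}

A, B = frozenset({"good"}), frozenset({"bad"})
theta = A | B                       # frame of discernment (ignorance)
m1 = {A: 0.6, B: 0.3, theta: 0.1}   # evidence source 1
m2 = {A: 0.5, B: 0.2, theta: 0.3}   # evidence source 2
m = combine(m1, m2)
print({tuple(sorted(k)): round(v, 4) for k, v in m.items()})
```

The combined mass on the "good" hypothesis would then play the role of the confidence factor used to rank the `n' horizon estimates.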

Proceedings ArticleDOI
24 Jul 2013
TL;DR: This project employs two schemes for coding transform coefficients, namely exponential Golomb coding and context adaptive variable length coding (CAVLC); the major contribution is the decoding strategy, which enhances performance by saving memory and decoding time, the most important factors for bandwidth utilization.
Abstract: As the costs of both processing power and memory have fallen, network support for coded video data has diversified and video coding technology has advanced, the need has arisen for an industry standard for compressed video representation with substantially increased coding efficiency and enhanced robustness to network environments. The H.264/AVC standard aims to enable significantly improved compression performance compared to all existing video coding standards. In this project we employ two schemes for coding transform coefficients, namely exponential Golomb coding and context adaptive variable length coding (CAVLC). The major part of the contribution is the decoding strategy, which enhances performance by saving memory and decoding time, the most important factors for bandwidth utilization. The transform coefficients are obtained using a simple zigzag scan technique. The consensus among the major players of the communications and video industries on H.264 might provide the major thrust for this new standard.
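Order-0 exponential Golomb coding, the first of the two schemes, can be sketched directly (this shows only the unsigned code; H.264 additionally defines signed and truncated mappings):

```python
def exp_golomb_encode(n):
    """Order-0 exponential Golomb code for an unsigned integer n:
    the binary form of n + 1, preceded by (bit-length - 1) zeros."""
    bits = bin(n + 1)[2:]
    return "0" * (len(bits) - 1) + bits

def exp_golomb_decode(stream):
    """Decode a concatenated string of order-0 exp-Golomb codes:
    count leading zeros, then read that many more bits and subtract 1."""
    out, i = [], 0
    while i < len(stream):
        zeros = 0
        while stream[i] == "0":
            zeros += 1
            i += 1
        out.append(int(stream[i:i + zeros + 1], 2) - 1)
        i += zeros + 1
    return out

codes = [exp_golomb_encode(n) for n in range(5)]
print(codes)   # ['1', '010', '011', '00100', '00101']
```

Small values get short codes, which suits the many near-zero transform coefficients produced by the zigzag scan.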

Proceedings ArticleDOI
15 Dec 2013
TL;DR: A fuzzy-based self-tuning approach is proposed wherein three inputs, namely buffer hit ratio, number of users and database size, are extracted from the database management system as sensor inputs that indicate degradation in performance, and key tuning parameters called effectors are altered according to fuzzy rules.
Abstract: Self-tuning of Database Management Systems (DBMS) offers important advantages such as improved performance, reduced Total Cost of Ownership (TCO), elimination of the need for an expert Database Administrator (DBA) and improved business prospects. Several techniques have been proposed by researchers and database vendors to self-tune a DBMS. However, the research focus has been confined to physical tuning techniques, and the algorithms used in existing methods for self-tuning of memory require analysis of large amounts of statistical data. As a result, these approaches are not only computationally expensive but also do not adapt well to highly unpredictable workload types and user-load patterns. Hence, in this paper a fuzzy-based self-tuning approach is proposed wherein three inputs, namely buffer hit ratio, number of users and database size, are extracted from the database management system as sensor inputs that indicate degradation in performance, and key tuning parameters called effectors are altered according to fuzzy rules. The fuzzy rules are framed after a detailed study of the impact of each tuning parameter on the response time of user queries. The proposed self-tuning architecture is based on the Monitor, Analyze, Plan and Execute (MAPE) feedback control loop framework [1] and has been tested under various workload types. The results have been validated by comparing the performance of the proposed self-tuning system with the auto-tuning feature of commercial database systems, and show significant improvement in performance under various workload types and user-load variations.
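One fuzzy rule of the kind described can be sketched as follows. The membership ranges, the single rule and the +50% cap are invented for illustration; the paper derives its rule base experimentally from the impact of each effector on query response time:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to its peak at b
    and falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def buffer_adjustment(hit_ratio, user_load):
    """Toy rule: IF buffer-hit-ratio is LOW AND user load is HIGH
    THEN increase the buffer pool.  Returns the recommended increase
    as a fraction of the current size (0.0 .. 0.5)."""
    low_hit   = tri(hit_ratio, -0.01, 0.0, 0.7)   # LOW hit ratio
    high_load = tri(user_load, 0.3, 1.0, 1.01)    # HIGH normalised load
    # Mamdani-style min for the firing strength, scaled to max +50 %.
    return 0.5 * min(low_hit, high_load)

print(buffer_adjustment(0.35, 0.9))   # degraded system -> grow buffer
print(buffer_adjustment(0.95, 0.2))   # healthy system  -> no change
```

A full controller would aggregate many such rules over all three sensor inputs and defuzzify the result before applying it to the effectors, inside the MAPE loop.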

Proceedings ArticleDOI
24 Sep 2013
TL;DR: A gaze-based text entry system, EyeBoard++, is proposed for Hindi, the national language of India; it minimizes dwell time by introducing word completion and word prediction methodologies while mitigating visual search time by highlighting the next probable characters.
Abstract: Of late, eye gaze has become an important modality of text entry on large and small display digital devices. Despite many tools being developed, issues like minimizing dwell time and visual search time, enhancing the accuracy of composed text, and stabilizing eye-controlled mouse movement are yet to be addressed. Moreover, eye typing interfaces having a large number of keys suffer from problems like selection of wrong characters and longer character search times. Certain linguistic issues also hinder the minimization of the dwell time incurred in character-by-character eye typing. These issues are especially prominent for Indian languages because of their many language-related complexities. In this paper, we propose a gaze-based text entry system, EyeBoard++, for Hindi, the national language of India, which minimizes dwell time by introducing word completion and word prediction methodologies while mitigating visual search time by highlighting the next probable characters. Performance evaluation shows that the proposed interface achieves a text entry rate of 9.63 words per minute on average. By design, the proposed interface can easily be accommodated in medium-sized display devices like Tablet PCs and PDAs. The proposed interface design approach, in fact, provides a solution to deal with the complexity of Indian languages and can be extended to many other languages. Also, the developed system can be used by people with motor disabilities.
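Word completion of the kind described can be sketched with a prefix trie. The tiny lexicon is illustrative; the paper's system additionally ranks predictions and highlights probable next characters on the keyboard:

```python
class Trie:
    """Minimal prefix trie for word completion."""
    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True                     # end-of-word marker

    def complete(self, prefix):
        """All stored words starting with prefix, in sorted order."""
        node = self.root
        for ch in prefix:
            if ch not in node:
                return []
            node = node[ch]
        out = []
        def walk(n, acc):
            for k, v in sorted(n.items()):
                if k == "$":
                    out.append(prefix + acc)
                else:
                    walk(v, acc + k)
        walk(node, "")
        return out

t = Trie()
for w in ["नमस्ते", "नमक", "नगर"]:          # tiny Hindi lexicon
    t.insert(w)
print(t.complete("नम"))
```

After each dwelled character, the interface would query the trie for completions of the typed prefix, and the first characters of those completions are exactly the "next probable characters" to highlight.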

16 Apr 2013
TL;DR: A method for detecting a driver's drowsiness and subsequently alerting the driver is proposed based on eye shape measurement, using local Successive Mean Quantization Transform (SMQT) features and a split-up SNoW classifier.
Abstract: Drowsiness of drivers is one of the main causes of road accidents; thus, countermeasure systems are required to prevent sleepiness-related accidents. In this paper, a method for detecting a driver's drowsiness and subsequently alerting the driver is proposed based on eye shape measurement. Local Successive Mean Quantization Transform (SMQT) features and a split-up SNoW classifier are used to detect the driver's face, and a shape measurement algorithm is used to find eye blinks. Different criteria, such as the duration of eyelid closure, the number of groups of continuous blinks and the frequency of eye blinks, are used to determine the driver's drowsiness. Experimental results show that this new algorithm achieves satisfactory drowsiness detection performance.

Book ChapterDOI
18 Jan 2013
TL;DR: This work proposes a method that extracts iris data by separating the iris region from the pupil and sclera using roipoly and inverse roipoly, and matches it against the stored repository using a reduced pixel block algorithm.
Abstract: Iris recognition is an important authentication mechanism used in many applications. Most applications capture the eye image, extract the iris features and store them in a database in digitized form. Existing methods use a normalization process in iris recognition, which takes comparatively more time. To overcome this drawback, we propose a method that extracts the iris data by separating the iris region from the pupil and sclera using roipoly and inverse roipoly. The extracted features are matched against the stored repository using a reduced pixel block algorithm. The experiment is carried out on CASIA-IrisV3-Interval, and the results show an improvement of 45% in feature extraction and 91% in matching compared to the existing method.

Journal ArticleDOI
TL;DR: A novel technique that combines the learning ability of an artificial neural network with the ability of a fuzzy system to deal with imprecise inputs is employed to estimate the extent of tuning required; results show significant performance improvement compared to the built-in self-tuning feature of the DBMS.
Abstract: A recent trend in database performance tuning is towards self-tuning, for important benefits such as efficient use of resources, improved performance and low cost of ownership that auto-tuning offers. Most modern database management systems (DBMS) provide several dynamically tunable parameters that enable the implementation of self-tuning systems. An appropriate mix of tuning parameters results in significant performance enhancement, either in the response time of queries or in overall throughput. The choice and extent of tuning of the available parameters must be based on their impact on performance and on the amount and type of workload the DBMS is subjected to. Given the tedious task of manual tuning and the non-availability of expert database administrators (DBAs), it is desirable to have a self-tuning database system that not only relieves the DBA of manual tuning but also eliminates the need for an expert DBA, thereby reducing the total cost of ownership of the entire software system. A self-tuning system also adapts well to dynamic workload changes and to user loads during peak hours, ensuring acceptable application response times. In this paper, a novel technique that combines the learning ability of an artificial neural network with the ability of a fuzzy system to deal with imprecise inputs is employed to estimate the extent of tuning required. Furthermore, the estimated values are moderated based on a knowledge base built from experimental findings. The experimental results show significant performance improvement compared to the built-in self-tuning feature of the DBMS.

Proceedings ArticleDOI
07 Nov 2013
TL;DR: In this article, the authors point out that the exponential growth in technical education has not translated into significant growth in the number of quality graduates acceptable to industry, due to insufficient availability of qualified faculty and shortcomings in teaching methodology, evaluation techniques and processes.
Abstract: Engineering education has become a main attraction, contributing to the global industrial revolution and in particular to the Indian economy. The exponential growth in technical education has, however, not translated into significant growth in the number of quality graduates acceptable to industry, due to insufficient availability of qualified faculty and shortcomings in teaching methodology, evaluation techniques and processes. The growing number of autonomous institutions projecting their own image is another mask over the underlying quality of education and evaluation procedures, which is supposed to be measured on a common scale and platform. The heterogeneity of curricula, varied infrastructure, and the quality of faculty in terms of competency and the ability to make a difference in the learning process towards making students ready for industry, higher education and research pose yet another problem for the accreditation bodies. Today, accreditation bodies are lacking in terms of equity and rigor in evaluation, quick response, flexibility and ease of operation and access, prevention of academic fraud, and participation of universities.


Journal ArticleDOI
23 Oct 2013
TL;DR: A GUI-based prototype for user-centered environments such as a classroom, library hall, laboratory, meeting hall, coffee shop, kitchen, living room or bedroom, which recommends useful services based on the user's context and resolves conflicts among different users using conflict-resolving algorithms.
Abstract: In this paper we propose a GUI-based prototype for user-centered environments such as a classroom, library hall, laboratory, meeting hall, coffee shop, kitchen, living room or bedroom, which recommends useful services based on the user's context. Service recommendation is mainly based on parameters such as user, location, time, day and mood. In addition, whenever a conflict arises among different users, it is resolved using conflict-resolving algorithms. The motivation behind the proposed work is to improve the user satisfaction level and the social relationship between users and devices. The prototype contains simulated sensors which capture raw context information; this information is then described with a meaningful English sentence, and services are recommended based on the user's situation. The proposed conflict-resolving algorithms are a rule-based algorithm, a Bayesian probability-based algorithm and a rough set theory-based algorithm. The number of conflicts resolved by these algorithms is also analyzed at the end.

Proceedings ArticleDOI
04 Jul 2013
TL;DR: In this paper, the behavioral characteristics of microstrip patch antennas with different dielectric substrates, such as glass epoxy, FR-4 and RT-Duroid, are studied.
Abstract: This paper studies the behavioral characteristics of microstrip patch antennas with different dielectric substrates such as glass epoxy, FR-4 and RT-Duroid. It gives a comparative study of rectangular microstrip patch antennas on different dielectric substrates for design parameters like return loss, bandwidth, gain, directivity and efficiency at 2.5 GHz. Different dielectric materials have different relative permittivities, and this varies the radiating capability of the antennas.

Book ChapterDOI
01 Jan 2013
TL;DR: Shape Based Image Retrieval (SBIR) is proposed to retrieve images using shape features extracted with gradient operators and Block Truncation Coding (BTC), which improves the edge maps obtained using gradient masks like Robert, Sobel, Prewitt and Canny.
Abstract: The need for Content Based Image Retrieval (CBIR) arises from the digital era. It is very much required in radiology to find similar diagnostic images, in advertising to find relevant stock, and for cataloging in geology, art and fashion. In CBIR, the image database is stored in terms of features, where the features of an image can be calculated based on different criteria like shape, color, texture and spatial location. Among these features, shape is prominent and helps to identify an image correctly. In this paper, we propose Shape Based Image Retrieval (SBIR) to retrieve images using shape features extracted with gradient operators and Block Truncation Coding (BTC). BTC improves the edge maps obtained using gradient masks like Robert, Sobel, Prewitt and Canny. The proposed image retrieval techniques are tested on a generic image database with 1000 images spread across 10 categories. The average precision and recall of all queries are computed and considered for performance analysis. Among all the considered gradient operators for shape extraction, the "shape mask with BTC" CBIR techniques give better results. The performance ranking of the masks for the proposed image retrieval methods is: Canny (best performance), Prewitt, Sobel and lastly Robert.
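The BTC step on a single block can be sketched as follows; this is the simple class-mean (absolute-moment) variant applied to one block, while the paper combines BTC with gradient edge maps over whole images:

```python
def btc_block(block):
    """Block Truncation Coding of one block: threshold each pixel at the
    block mean, then represent the two classes by their own means.
    Returns (bitmap, high_mean, low_mean)."""
    flat = [p for row in block for p in row]
    mean = sum(flat) / len(flat)
    bitmap = [[1 if p >= mean else 0 for p in row] for row in block]
    ones = [p for p in flat if p >= mean]
    zeros = [p for p in flat if p < mean]
    high = sum(ones) / len(ones) if ones else mean
    low = sum(zeros) / len(zeros) if zeros else mean
    return bitmap, high, low

# A 4x4 block with a dark left half and a bright right half
block = [[10, 12, 200, 210],
         [11, 13, 205, 215],
         [ 9, 10, 198, 202],
         [12, 14, 201, 207]]
bitmap, high, low = btc_block(block)
print(bitmap[0], high, low)
```

The bitmap acts as a crude two-level edge/shape map of the block, which is what lets BTC sharpen the gradient-mask edge maps used as the retrieval feature.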

Journal ArticleDOI
TL;DR: This approach converts extracted iris features into barcodes, reducing the storage space and the time required for searching and matching operations, which are essential in real-time applications.
Abstract: Iris recognition is an important authentication mechanism. Authentication requires verifying the uniqueness of individuals, so converting iris data into a barcode is appropriate for authenticating individuals and establishing uniqueness; such a barcode is unique for every iris image. In iris recognition, most applications capture the eye image, extract the iris features and store them in the database in digitized form. The size of the digitized form is equal to, or a little less than, that of the original iris image. This leads to drawbacks such as greater memory usage and more time required for searching and matching operations. To overcome these drawbacks, we propose an approach wherein extracted iris features are converted into barcodes. This transformation of the iris into a barcode reduces the storage space and the time required for searching and matching operations, which are essential features in real-time applications.