
Showing papers by "Heritage Institute of Technology published in 2015"


Journal ArticleDOI
TL;DR: This paper investigates multi-item integrated production-inventory models of a supplier and a retailer with a constant rate of deterioration under stock-dependent demand, and formulates deterministic optimization models for minimizing the total cost of the entire supply chain.

66 citations


Journal ArticleDOI
TL;DR: Based on plots and variance analysis, the operating conditions maximizing lactose hydrolysis were optimized with respect to temperature, pH and enzyme concentration in free mode, and with respect to sodium alginate concentration, calcium chloride concentration and enzyme concentration in immobilized mode.

50 citations


Journal ArticleDOI
TL;DR: An electronic tongue with an array of five noble-metal working electrodes has been developed for monitoring the fermentation process and detecting the optimum fermentation time of crush-tear-curl black tea.
Abstract: This paper presents a new methodology to monitor the fermentation process and detect the optimum fermentation time of crush tear curl black tea with a voltammetric electronic tongue. An electronic tongue with an array of five noble metal working electrodes has been developed for this purpose. A suitable large amplitude pulse voltammetric waveform has been employed for probing the chemical changes in tea samples under fermentation. Good correlation between the electronic tongue responses and the biochemical changes has been obtained by principal component analysis (PCA) during various stages of fermentation. The electronic tongue fermentation profile has been derived from PCA analysis and it is observed that such a profile enables detection of optimum fermentation times. Finally, a model based on partial least squares regression technique has been developed for real-time indication of fermentation level. The optimum fermentation times observed from the electronic tongue fermentation profiles derived from PCA and partial least squares regression correlate by a factor of 0.97 and 0.96, respectively, with the reference values obtained from an ultraviolet-visible spectrophotometer-based instrumental analysis.

45 citations


Journal ArticleDOI
01 Sep 2015-Opsearch
TL;DR: This paper develops a generalized intuitionistic fuzzy number and its arithmetic operations; it is a unique attempt in which, for the first time, two basic generalized intuitionistic fuzzy numbers, namely the generalized trapezoidal and the generalized triangular intuitionistic fuzzy numbers, have been considered to serve the purpose.
Abstract: Intuitionistic fuzzy set theory has always been a subject of keen interest, and rigorous research has been done on it. However, that research was mainly based on normal intuitionistic fuzzy numbers; a generalized approach could hardly be seen. So in this paper, we have developed a generalized intuitionistic fuzzy number and its arithmetic operations. It is a unique attempt in which, for the first time, two basic generalized intuitionistic fuzzy numbers, namely the generalized trapezoidal and the generalized triangular intuitionistic fuzzy numbers, have been considered to serve the purpose. All arithmetic operations have been formulated on the basis of the (α, β)-cut method, the vertex method and the extension principle method. A comparison among these three methods using an example is given, and the numerical results are presented graphically. A new method is proposed to solve the generalized intuitionistic fuzzy transportation problem (GIFTP) using a ranking function. To validate the proposed method, we have solved a GIFTP assuming the transportation cost, supply and demand of the product to be generalized intuitionistic fuzzy numbers, and the optimum results have been compared with the results of the normal intuitionistic fuzzy transportation problem.

38 citations


Journal ArticleDOI
TL;DR: Possibility, necessity and credibility measures for Atanassov's intuitionistic fuzzy numbers have been developed here for the first time.
Abstract: In some practical situations the decision maker is interested in setting multiple aspiration levels for objectives that may not be expressed in a specific manner. So in this paper, an Atanassov's intuitionistic fuzzy transportation problem with multi-item, multi-objective functions and multiple choices is considered. We have modeled the multi-objective multi-choice multi-item Atanassov's intuitionistic fuzzy transportation problem (MMMIFTP) and several of its special cases. Possibility, necessity and credibility measures for Atanassov's intuitionistic fuzzy numbers have been developed here for the first time. The solution methodology of those models using the chance operator has been discussed. A real-life example is presented to illustrate the proposed models numerically, and the results are compared. The optimal results are obtained by using three different soft computing techniques: (i) the interactive satisfied method, (ii) the global criteria method and (iii) the goal programming method.

35 citations


Journal ArticleDOI
TL;DR: In this paper, the free vibration behavior of laminated composite stiffened elliptic parabolic shells has been analyzed in terms of natural frequency and mode shape; the finite element method has been applied using an eight-noded curved quadratic isoparametric element together with a three-noded curved beam element.
Abstract: In this paper, the free vibration behavior of laminated composite stiffened elliptic parabolic shells has been analyzed in terms of natural frequency and mode shape. The finite element method has been applied using an eight-noded curved quadratic isoparametric element for the shell with a three-noded curved beam element for the stiffener. Cross- and angle-ply shells with different edge conditions have been studied, varying the size and position of the cutouts, to arrive at a set of inferences of practical engineering significance.

26 citations


Proceedings ArticleDOI
05 Mar 2015
TL;DR: The final analysis of the speed-torque characteristics of the designed Brushless DC motor shows quite satisfactory output for varying (0-20 N-m) load torque applications.
Abstract: This paper presents a mathematical model of a three-phase Brushless DC motor based on a precise speed control methodology with ideal Back EMF on the MATLAB/Simulink platform. In this scheme, the equivalent output of three Hall sensors determines the rotor position at any instant of time and accordingly switches the six-step inverter to drive the motor. The model is based on the phase voltage and electromagnetic torque equations. Further analysis has been carried out with the transfer function to achieve the desired level of performance. Moreover, the ideal Back EMF is also derived from the mechanical speed and mechanical angle of the rotor. A wide variation of speed control is accomplished with a PI controller due to its simple control structure and ease of implementation. The final analysis of the speed-torque characteristics of the designed Brushless DC motor shows quite satisfactory output for varying (0-20 N-m) load torque applications.
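The PI speed loop mentioned above can be caricatured in a few lines. This is a rough sketch only, assuming a first-order plant in place of the paper's full Simulink motor model; the gains, time constant and setpoint are invented for illustration:

```python
def simulate_pi_speed(setpoint_rpm=1000.0, kp=0.5, ki=2.0,
                      tau=0.1, gain=10.0, dt=0.001, steps=5000):
    """Discrete PI regulation of a first-order speed model
    tau * dspeed/dt = -speed + gain * u (all values illustrative)."""
    speed = 0.0        # rotor speed (rpm)
    integral = 0.0     # accumulated speed error
    for _ in range(steps):
        err = setpoint_rpm - speed
        integral += err * dt
        u = kp * err + ki * integral          # control effort
        speed += dt * (-speed + gain * u) / tau   # Euler step of the plant
    return speed

print(round(simulate_pi_speed(), 1))   # settles at the 1000 rpm setpoint
```

With an integral term in the loop, the steady-state speed error goes to zero, which is the property the paper relies on for precise speed control over a wide range.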

22 citations


Journal ArticleDOI
TL;DR: This paper presents an extensive simulation, using the ONE simulator, of different categories of well-known existing DTN architectures, mobility models and DTN routing protocols to study network performance, and arrives at a suitable energy efficient DTN framework combining appropriate architecture and routing protocol that may be used to offer different disaster management services during post-disaster relief operations.
Abstract: In recent years, several investigations have been made on "challenged network" [also known as, delay tolerant network (DTN)] architectures highlighting their advantages and disadvantages. In spite of its inherent shortcomings of unreliability and delay, smart-phone based opportunistic network is gaining immense popularity in the research community due to its applicability in different adverse and extreme communication scenarios where traditional communication infrastructure is either unavailable or incapacitated for a long time. Considerable research has been conducted till date to design efficient network architecture for emergency data dissemination in intermittently connected/challenged networks. Different architectures (ranging from flat to multi-tier) are proposed for implementing opportunistic DTN keeping in view of specific application requirements. However, very few of these proposed architectures have been examined from the perspective of a post disaster communication. Moreover, most DTN architectures are designed by aiming towards increasing delivery probability and reducing latency and as a whole, maximizing network throughput; with very little emphasis on its energy efficiency. The impact of different mobility patterns on existing DTN routing algorithms with respect to data dissemination and energy consumption has also not been studied so far. Our objective, in this paper, is to investigate the impact of different mobility patterns on existing DTN routing protocols and existing DTN architectures and subsequently, come up with a suitable energy efficient DTN framework combining suitable architecture and routing protocol that may be used to offer different disaster management services during post-disaster relief operation. 
We have carried out an extensive simulation using the ONE simulator with different categories of well-known existing DTN architectures, mobility models and DTN routing protocols to study the network performance in terms of delivery probability, energy efficiency and overhead ratio. Based on the simulation results, we have tried to figure out a suitable combination of energy efficient architecture, mobility model and routing protocol that fits well in the post-disaster communication scenario.

21 citations


Proceedings ArticleDOI
01 Feb 2015
TL;DR: A modification of delta modulation for effective compression of PPG signal for real time measurements and monitoring applications is proposed.
Abstract: Photoplethysmography (PPG) is one of the prime non-invasive vascular assessment methodologies adopted in modern clinical practice. This paper proposes a modification of delta modulation for effective compression of the PPG signal for real-time measurement and monitoring applications. Based on the energy content and fluctuations, the PPG signal was windowed and categorized into ‘complex’ or ‘plain’ zones. The algorithm applies adaptive hard thresholding, run length encoding and selective biasing of the first difference array based on zonal complexity. The final stage employs a selective nibble combination or offsetting rule for further compression. For performance assessment of the compression-decompression algorithms, PPG data was collected from the fingers of healthy volunteers using Biopac Systems®. An average compression ratio of 3.84, a percentage root mean square difference of 5.82 and a normalized percentage root mean square difference of 7.57 were obtained with 20 sets of volunteers' data at 10-bit resolution and 125 Hz sampling. The decompressed data were clinically validated by cardiologists.
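One building block named above, run-length encoding of the first-difference array, can be sketched as follows. The threshold, the toy signal and the output format are illustrative assumptions, not the paper's actual encoding:

```python
def first_difference(samples):
    """Successive differences of a sampled signal."""
    return [b - a for a, b in zip(samples, samples[1:])]

def run_length_encode(deltas, threshold=1):
    """Collapse runs of near-zero differences ('plain' stretches) into
    ('run', count) pairs; larger differences pass through individually."""
    encoded, i = [], 0
    while i < len(deltas):
        if abs(deltas[i]) <= threshold:
            j = i
            while j < len(deltas) and abs(deltas[j]) <= threshold:
                j += 1
            encoded.append(("run", j - i))   # quiescent stretch
            i = j
        else:
            encoded.append(("delta", deltas[i]))
            i += 1
    return encoded

ppg = [100, 100, 101, 108, 115, 115, 114, 114]   # made-up samples
print(run_length_encode(first_difference(ppg)))
# [('run', 2), ('delta', 7), ('delta', 7), ('run', 3)]
```

Flat stretches of the pulse wave compress to a single count, while the steep systolic upstroke keeps its individual differences, mirroring the complex/plain zoning idea.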

21 citations


Journal ArticleDOI
TL;DR: In this paper, non-uniformly excited linear arrays are optimized using the Taylor distribution and the classical particle swarm optimisation (CPSO) algorithm for obtaining a desired equal side lobe level (SLL).
Abstract: In this article, non-uniformly excited linear arrays are optimised using the Taylor distribution and the classical particle swarm optimisation (CPSO) algorithm for obtaining a desired equal side lobe level (SLL). The elements of the array are considered to be isotropic in nature with uniform inter-element spacing. The excitation amplitude of each element is taken as an optimisation parameter. The Taylor distribution defines the range of excitation amplitudes in which the CPSO algorithm searches for the optimum values, with the objective of obtaining the desired equal SLL. The proposed method eliminates the initial randomness of defining the search space for the CPSO algorithm. Comparisons with other methods have been made wherever possible. The results reveal that the proposed method can be used to obtain the desired SLL.
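The quantity being optimized, the side lobe level of a non-uniformly excited linear array, can be computed directly from the array factor. Below is a sketch of the kind of fitness evaluation a CPSO run would minimize, using an illustrative 10-element half-wavelength-spaced array rather than the paper's configuration:

```python
import math

def array_factor_db(amps, theta_deg, spacing_wl=0.5):
    """Normalized array factor (dB) of a broadside linear array of
    isotropic elements; amps are the per-element excitation amplitudes."""
    theta = math.radians(theta_deg)
    psi = 2 * math.pi * spacing_wl * math.cos(theta)   # phase between elements
    re = sum(a * math.cos(n * psi) for n, a in enumerate(amps))
    im = sum(a * math.sin(n * psi) for n, a in enumerate(amps))
    mag = math.hypot(re, im) / sum(amps)   # normalize to the main-beam peak
    return 20 * math.log10(max(mag, 1e-12))

# Uniform excitation as a baseline: scan the side-lobe region (away from
# the broadside main beam at 90 degrees) for the peak side lobe.
uniform = [1.0] * 10
sll = max(array_factor_db(uniform, t) for t in range(20, 75))
print(round(sll, 1))   # about -13 dB, the classical uniform-array SLL
```

A PSO fitness function would evaluate exactly this peak SLL for each candidate amplitude vector, with the Taylor distribution bounding the search range of each amplitude as described above.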

18 citations


Proceedings ArticleDOI
01 Dec 2015
TL;DR: The proposed system can detect whether a person is wearing a mask, count the number of people present inside the ATM kiosk and generate a warning signal, thereby removing the need for constant human supervision.
Abstract: This paper presents an automated system to increase the security and surveillance of ATM kiosks. Due to the increase of robbery in ATM kiosks, it is important to employ an automated surveillance system to protect and secure the ATM machine from threats. Currently, a camera attached to the ATM unit records and transmits the video feed to the main server of the bank. Around the clock, this manual surveillance utilizes a lot of bandwidth for transmission, wastes memory and responds late to emergency situations. Consequently, early detection of the situation is necessary to take preventive measures against an ongoing burglary. The system proposed in this paper can detect whether a person is wearing a mask or not. It is also capable of counting the number of people present inside the ATM kiosk and generating a warning signal. This removes the need for constant human supervision, reduces the storage of unnecessary video feed by transmitting only anomalous situations, and enables a faster response to a threat by shutting down the ATM machine as soon as the system detects it.

Journal ArticleDOI
TL;DR: In this paper, a generalized hyperbolic model for predicting undrained pore-water pressure has been proposed as a function of cyclic stress ratio, frequency, and plasticity of soils using data reported by a number of researchers.
Abstract: Prediction of pore-water pressure is important for understanding the behavior of soils under both static and cyclic loading and also for estimation of their effective stresses. Undrained cyclic pore pressure models available in the literature are for some typical soils and are hardly applicable to other soils. In this paper, a generalized hyperbolic model for prediction of undrained pore-water pressure has been proposed as a function of cyclic stress ratio, frequency, and plasticity of soils using data reported by a number of researchers. The proposed model has been validated by a chi-square goodness-of-fit test. The correlation between the predicted and measured pore pressure ratio at various confining pressure regimes has also been statistically examined to discern the model. The model output has been found to correlate to an extent of up to 85% with the literature data.
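The hyperbolic family such a model generalizes can be illustrated with a generic saturating curve of the form r_u = N/(a + bN), where N is the number of load cycles. The coefficients here are arbitrary placeholders, since the paper's actual parameterization in terms of cyclic stress ratio, frequency and plasticity is not reproduced:

```python
def pore_pressure_ratio(n_cycles, a=5.0, b=1.0):
    """Generic hyperbolic pore-pressure growth curve r_u = N / (a + b*N).

    a controls the initial slope; the curve saturates towards 1/b.
    Coefficients are illustrative, not fitted values from the paper."""
    return n_cycles / (a + b * n_cycles)

print(pore_pressure_ratio(5))            # 0.5: halfway point at N = a/b
print(round(pore_pressure_ratio(500), 2))  # near saturation at 1/b = 1.0
```

The hyperbolic shape rises steeply in early cycles and flattens as the pore pressure ratio approaches its limiting value, the qualitative behavior the generalized model captures across soil types.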

Proceedings ArticleDOI
01 Jan 2015
TL;DR: This article proposes a novel and efficient approach for text line segmentation in which the input document image is first smudged to blur out the white spaces between words while preserving the gaps between consecutive lines, and an initial segmentation is then obtained by shredding the image along the whitest pixels between consecutive smudged lines.
Abstract: Text line segmentation plays a vital role in the overall performance of a document recognition system. In the literature, similar segmentation works for offline handwritten Bangla documents are rarely found. On the other hand, certain peculiarities of handwritten Bangla script, such as widespread occurrences of ascenders and descenders, or some of its characters appearing only as an ascender or descender, often cause unique difficulties for this segmentation task. The existence of connected components spanning a number of successive text lines is a common phenomenon in unconstrained handwritten Bangla documents. In this article, we propose a novel and efficient approach for text line segmentation where initially we smudge the input document image to blur out white spaces between words while preserving gaps between consecutive lines. Next, we obtain an initial segmentation by shredding the image along the whitest pixels between consecutive smudged lines. Multi-line connected components are taken care of by thinning and then finding the most probable point of separation in the component. Combining this with the initial segmentation, we obtain the final output. The proposed approach has been evaluated on the ICDAR 2013 Handwriting Segmentation Contest dataset for Bangla. The segmentation results show the efficiency of the proposed approach.
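The smudge-then-split idea can be demonstrated on a toy binary image. This is a minimal sketch with an invented three-row "page" and window size standing in for real document images; the paper's shredding step along whitest pixels is simplified here to splitting at all-white rows:

```python
def smudge(image, window=2):
    """Mark a pixel as ink (1) if any pixel within `window` columns is ink,
    closing inter-word gaps while leaving blank rows untouched."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            lo, hi = max(0, c - window), min(w, c + window + 1)
            out[r][c] = 1 if any(image[r][lo:hi]) else 0
    return out

def line_bands(image):
    """Group consecutive rows containing ink into text-line bands."""
    bands, start = [], None
    for r, row in enumerate(image):
        if any(row) and start is None:
            start = r
        elif not any(row) and start is not None:
            bands.append((start, r - 1))
            start = None
    if start is not None:
        bands.append((start, len(image) - 1))
    return bands

page = [
    [0, 1, 0, 0, 1, 0],   # line 1 (two separate "words")
    [0, 0, 0, 0, 0, 0],   # inter-line gap stays white
    [1, 0, 0, 0, 0, 1],   # line 2
]
print(line_bands(smudge(page)))   # [(0, 0), (2, 2)]: two text lines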

Proceedings ArticleDOI
01 Nov 2015
TL;DR: The focus of this paper is to design a low cost, reliable e-Health monitoring device using an IoT based platform architecture that can measure the vital health parameters of an individual, securely store the data in a server and analyze the data for generating proactive alerts as applicable.
Abstract: Lifestyle-related ailments are resulting in a need for close medical attention for a larger number of people. In this regard, systems that can provide proactive health-related alerts would be beneficial. e-Health and telemedicine offer powerful tools to improve efficiency in the delivery of healthcare services. The focus of this paper is to design a low cost, reliable e-Health monitoring device using an IoT based platform architecture that can measure the vital health parameters of an individual, securely store the data in a server and analyze the data for generating proactive alerts as applicable. The platform also connects the patient to registered doctors, who will have access to the patient's health data. Vital health parameters are measured using simple, non-restricting electrical sensors. The platform has outlier detection capabilities and is designed to send alerts to patients and their respective doctors in case any anomaly occurs. Thus, the proposed solution can be used to provide clinical healthcare at a distance, hence enabling the inclusion of elderly or homebound patients, and those who are located in the remotest corners of the country, within the realm of healthcare services.

Book ChapterDOI
01 Jan 2015
TL;DR: The Quality Function Deployment (QFD) analytical tool is used to find, from primary research data, the different parameters that affect the practice of e-waste recycling as a green computing approach; the result will help stakeholders in implementing the green computing approach.
Abstract: Green computing is an environmentally responsible approach to reducing electronic waste and power consumption that helps in using computing resources efficiently. With the increase in the use of computers and other electronic devices, energy consumption and the carbon footprint are also increasing. E-waste recycling is one of the important approaches towards green computing. This paper focuses on the approaches of green computing and how it minimizes the environmental impacts of computers and other electronic devices effectively through e-waste recycling. The Quality Function Deployment (QFD) analytical tool is used to find, from primary research data, the different parameters that affect the practice of e-waste recycling as a green computing approach. The result will help stakeholders in implementing the green computing approach.

Journal ArticleDOI
TL;DR: In this paper, the authors describe and illustrate an application of a structured approach to determine relative efficiency and ranking of a set of private engineering colleges under a multi-criteria environment.
Abstract: Purpose – Technical education plays an important role in the development of a country in this age of the knowledge economy. The Indian technical education system is facing many opportunities and challenges, one of which is how to assess the performance of technical institutions based on multiple criteria. The purpose of this paper is to describe and illustrate an application of a structured approach to determine the relative efficiency and ranking of a set of private engineering colleges under a multi-criteria environment. Design/methodology/approach – To cater to the increasing need for technical manpower, a very large number of private engineering colleges have been established in the state of West Bengal in eastern India within a very short period. Uniform and acceptable quality of the graduates from many of these private engineering colleges is a concern today, and therefore the need for performance evaluation and ranking of these colleges is paramount. For the proposed framework a comparatively new multiple criteria...

Proceedings ArticleDOI
01 Oct 2015
TL;DR: The aim is to inject capacitances of the required values when the power factor falls below a specified level, using a micro-controller to switch in a set of capacitors whose combined value is very close to the exact capacitance required.
Abstract: The chief objective is to improve the power quality by continuously monitoring the load power factor. When the load power factor falls below a certain value, the line current increases, resulting in more line loss and a greater voltage drop. Thus, the aim is to inject capacitances of the required values when the power factor falls below the specified level. Primarily, a signal of pulse width proportional to the phase difference is generated. From the ON time period of each pulse the power factor can be determined. The exact value of the capacitance to be injected is then computed from this phase difference. Finally, the value of capacitance so obtained is approximated using standard capacitor values. A micro-controller switches in these capacitors, which taken together are very close to the exact value of the required capacitance.
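The capacitance computation referred to above amounts to a standard power-factor-correction calculation. A sketch under single-phase assumptions, with illustrative supply figures (230 V, 50 Hz) that are not stated in the paper:

```python
import math

def correction_capacitance(p_watts, pf_actual, pf_target,
                           v_rms=230.0, freq=50.0):
    """Shunt capacitance raising a lagging load's power factor.

    Reactive power to cancel: Qc = P * (tan(phi1) - tan(phi2)),
    then C = Qc / (2*pi*f*V^2). Supply values are illustrative."""
    phi1 = math.acos(pf_actual)   # present phase angle
    phi2 = math.acos(pf_target)   # desired phase angle
    qc = p_watts * (math.tan(phi1) - math.tan(phi2))
    return qc / (2 * math.pi * freq * v_rms ** 2)

c = correction_capacitance(1000.0, 0.70, 0.95)
print(round(c * 1e6, 1), "microfarad")   # 41.6 microfarad
```

The micro-controller's job is then to approximate this computed value with the nearest achievable combination of standard capacitors and switch them in.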

Proceedings ArticleDOI
20 Apr 2015
TL;DR: Inspired by an ant colony based route selection algorithm, a volunteer-concentration based adaptive mobility of the observers is implemented, and some improvements to the reputation collection mechanism are suggested so that the global reputation matrix can be generated quickly in a sparse network.
Abstract: Researchers have proposed to use smart-phone based opportunistic networks for post-disaster communication, where the smart-phones carried by volunteers (also known as nodes) are used to exchange situational information from different corners of the disaster affected areas. In such a scenario, some malicious nodes may try to intercept and manipulate the sensitive situational data with the intention of corruption and fraud. One way of preventing such corruption is to detect the malicious nodes based on their reputation and avoid them during data forwarding. In our earlier work [1], we proposed a dynamic reputation estimation technique where a group of trusted, independent, roving observers were assigned to specific affected zones. They randomly monitor and estimate the behavior of nodes in terms of their cooperation pattern as well as group-biasness, and periodically publish a global node reputation matrix to help other nodes in selecting a suitable unselfish, unbiased and cooperative forwarder. In this paper, we suggest some improvements to the reputation collection mechanism proposed in our earlier work so that the global reputation matrix can be generated quickly in a sparse network. In a post-disaster scenario, volunteers normally work around shelters; as a result, volunteers are found to be sparsely spread across the entire affected area. Observers, being unaware of the locations of volunteers or shelters, move around randomly within their designated zones to collect the reputation of volunteers. Thus, the time taken to collect the reputation of all nodes in a specific zone is found to be quite large. Our objective, in this paper, is to adapt the mobility of an observer towards the more volunteer-rich areas in that zone so that the maximum number of volunteers can be met in minimum time and the reputations may be collected quickly. Inspired by an ant colony based route selection algorithm, we have implemented volunteer-concentration based adaptive mobility of the observers.
The performance of the proposed scheme is evaluated using ONE simulator [13].

Journal ArticleDOI
TL;DR: In this paper, a two-echelon supply chain composed of one supplier and one retailer is analyzed and compared in accordance with Stackelberg and integrated approaches, and the optimal order quantity, selling price, promotional effort and service level are evaluated analytically as well as numerically for single-period news-vendor-type demand patterns.
Abstract: This paper analyses a two-echelon supply chain composed of one supplier and one retailer. The market demand is assumed to be uncertain and considered to be retail price dependent and dependent on the supplier's service level and on the retailer's promotional effort. The unsold items at the retailer are repurchased by the supplier at a price less than the sales price. Conversely, the retailer encounters shortages because the demand is naturally uncertain. The optimal order quantity, selling price, promotional effort and service level are evaluated analytically as well as numerically for single-period news-vendor-type demand patterns. The profit functions of the supplier and the retailer are analysed and compared in accordance with Stackelberg and integrated approaches. Computational results show that an integrated system is always beneficial for the members of the chain.
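For intuition on the news-vendor structure underlying this model, the classical single-period order quantity follows from the critical ratio. A minimal sketch assuming uniformly distributed demand and invented prices, not the paper's price- and effort-dependent demand:

```python
def newsvendor_qty(price, cost, salvage, d_low, d_high):
    """Classical news-vendor order quantity for demand ~ Uniform(d_low, d_high).

    Optimal Q satisfies F(Q) = cu / (cu + co), the critical ratio.
    All figures here are illustrative."""
    underage = price - cost      # cu: profit lost per unit of unmet demand
    overage = cost - salvage     # co: loss per unsold unit (buy-back price)
    ratio = underage / (underage + overage)
    return d_low + ratio * (d_high - d_low)   # uniform-CDF inverse

q = newsvendor_qty(price=10.0, cost=6.0, salvage=2.0, d_low=50, d_high=150)
print(q)   # 100.0: symmetric costs put Q* at the mean demand
```

The paper's model layers service level, promotional effort and pricing decisions on top of this basic trade-off between overstocking and shortage costs.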

Proceedings ArticleDOI
16 Mar 2015
TL;DR: An ideal back-EMF generation methodology has been designed and implemented in order to align the system with the ideal case, and a significant reduction in torque ripple is achieved.
Abstract: This paper describes a new mathematical model of a permanent magnet brushless dc motor (PMBLDC) with a sensorless commutated drive on a MATLAB/Simulink based platform. The main challenges in sensorless BLDC commutation techniques compared with sensored drives lie in the identification of the first commutation point and the minimization of torque ripple. In this work, an ideal back-EMF generation methodology has been designed and implemented in order to align the system with the ideal case. As a result, a significant reduction in torque ripple is achieved. The characteristics of this model give satisfactory outputs over a wide range of controlled speed variation from 700 to 14000 rpm. Moreover, the torque and speed characteristics provide a high torque even at low speed. The strength of the sensorless commutation technique used here is established by performance analysis of the simulated design.

Book ChapterDOI
19 May 2015
TL;DR: A decomposition based Recommendation Algorithm using Multiplicatively Weighted Voronoi Diagrams is proposed to reduce the running time without compromising the recommendation quality much, allowing the Recommender Systems to tackle bigger datasets using the same resources.
Abstract: The Collaborative Filtering (CF) technique is used by most Recommender Systems (RS) for formulating suggestions of items relevant to users' interests. It typically associates a user with a community of like-minded users, and then recommends to the user items liked by others in the community. However, with the rapid growth of the Web in terms of users and items, the majority of RS using the CF technique suffer from the scalability problem. In order to address this scalability issue, we propose a decomposition based Recommendation Algorithm using Multiplicatively Weighted Voronoi Diagrams. We divide the entire users' space into smaller regions based on location, and then apply the Recommendation Algorithm separately to these regions. This helps us to avoid computations over the entire data. We measure Spatial Autocorrelation indices in the regions or cells formed by the Voronoi decomposition. One of the main objectives of our work is to reduce the running time without compromising the recommendation quality much. This ensures scalability, allowing us to tackle bigger datasets using the same resources. We have tested our algorithms on the MovieLens and Book-Crossing datasets. Our proposed decomposition scheme is oblivious of the underlying recommendation algorithm.
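The multiplicatively weighted Voronoi rule used for such a decomposition assigns a point to the site minimizing distance divided by weight, so heavier sites claim larger cells. A minimal sketch with made-up sites and weights:

```python
import math

def mw_voronoi_cell(point, sites, weights):
    """Index of the site minimizing weighted distance d(p, s_i) / w_i."""
    best, best_score = 0, float("inf")
    for i, ((sx, sy), w) in enumerate(zip(sites, weights)):
        score = math.hypot(point[0] - sx, point[1] - sy) / w
        if score < best_score:
            best, best_score = i, score
    return best

sites = [(0.0, 0.0), (10.0, 0.0)]   # illustrative cell centers
weights = [1.0, 3.0]                # the heavier site attracts a larger cell
print(mw_voronoi_cell((4.0, 0.0), sites, weights))   # 1: pulled by weight
print(mw_voronoi_cell((1.0, 0.0), sites, weights))   # 0: still closest
```

Once every user is mapped to a cell this way, the recommendation algorithm can run independently per cell instead of over the entire user space.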

Journal ArticleDOI
20 Aug 2015-PLOS ONE
TL;DR: An algorithm for detecting CNVs is presented, which is based on depth-of-coverage data generated by NGS technology and considers the read count data to follow two different distribution models independently, adding to the robustness of CNV detection.
Abstract: Copy number variation (CNV) is a form of structural alteration in the mammalian DNA sequence, which is associated with many complex neurological diseases as well as cancer. The development of next generation sequencing (NGS) technology provides us a new dimension towards the detection of genomic locations with copy number variations. Here we develop an algorithm for detecting CNVs, which is based on the depth of coverage data generated by NGS technology. In this work, we have used a novel way to represent the read count data as a two-dimensional geometrical point. A key aspect of detecting the regions with CNVs is to devise a proper segmentation algorithm that will distinguish the genomic locations having a significant difference in read count data. We have designed a new segmentation approach in this context, using a convex hull algorithm on the geometrical representation of the read count data. To our knowledge, most algorithms have used a single distribution model of the read count data, but here in our approach, we have considered the read count data to follow two different distribution models independently, which adds to the robustness of detection of CNVs. In addition, our algorithm calls CNVs based on a multiple sample analysis approach, resulting in a low false discovery rate with high precision.
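The geometric primitive behind that segmentation step, a convex hull over two-dimensional points, can be sketched with the classic monotone-chain algorithm; the points standing in for (position, read count) pairs are invented for the example:

```python
def convex_hull(points):
    """Monotone-chain convex hull; returns vertices in counter-clockwise
    order starting from the lexicographically smallest point."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                     # build lower chain left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):           # build upper chain right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]    # drop duplicated endpoints

pts = [(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 3)]
print(convex_hull(pts))   # interior point (1, 1) is excluded
```

Interior points (here, the one typical read-count point) drop out of the hull, which is the property a hull-based segmentation exploits to flag outlying coverage regions.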

Proceedings ArticleDOI
04 May 2015
TL;DR: A scheme for developing a coherent global view of the post-disaster situation using local situational information in a smart-phone based delay tolerant peer-to-peer network environment and a concept of “opportunistic knowledge injection” to disseminate local situational knowledge to other remote areas without significant network overhead is proposed.
Abstract: Situational awareness is a critical component in a post-disaster recovery operation for assessment of needs and identification of available resources at different parts of a disaster-affected area. This information, in turn, may help the relief agencies to appropriately coordinate, manage and channelize their resources. A major hindrance in developing such global situational awareness is the non-uniform interaction pattern of relief workers. Volunteers in a particular region have much better knowledge of the local situation than those belonging to regions further away. This information asymmetry leads to deviation in perceptions of volunteers working in different regions, thereby affecting the resource distribution process. Thus, a unified global situational view of the entire disaster affected area is essential to bridge the perception gap of volunteers and to help them develop a common understanding of the actual scenario. In this paper, we propose a scheme for developing such a coherent global view of the post-disaster situation using local situational information in a smart-phone based delay tolerant peer-to-peer network environment. We focus on generating a comprehensive view which is consistent for all workers irrespective of their location or mobility. The proposed scheme takes into account the spatial locality and spatial regularity properties of human mobility and uses a concept of “opportunistic knowledge injection” to disseminate local situational knowledge to other remote areas without significant network overhead. The effectiveness of the proposal is evaluated on the ONE simulator.

Proceedings ArticleDOI
04 Jan 2015
TL;DR: This proposed protocol adapts PRoPHET for post disaster group encounter based routing and enhances it by incorporating certain security elements into it to provide full security against possible attacks by malicious nodes in the network.
Abstract: In this paper, we propose SAGE-PRoPHET, a security enhanced PRoPHET routing protocol that enables secure dissemination of post disaster situational messages using the history of group encounters. Post disaster rescue and relief operations are essentially group based, where volunteers and rescue workers, belonging to different rescue groups, relay situational information relevant to their group to their respective relief camps, in multiple hops on a peer-to-peer basis. Now, it is evidently better to route situational information, destined for a relief camp of a particular group, through volunteers of that group or those who have a history of encountering volunteers of that group frequently. Such history-of-encounters based routing resembles the PRoPHET routing protocol for delay tolerant networks, which forwards messages intended for a particular receiver through those nodes that encounter that receiver frequently. However, to use PRoPHET for such group based routing of group specific messages, the protocol needs to be tuned to use the history of group encounters rather than individual encounters. On the other hand, PRoPHET assumes that nodes in the network are trusted and cooperate towards message forwarding. Such an assumption turns out to be inaccurate in the presence of malicious nodes that may severely impede the delivery, accuracy and timeliness of situational messages. Therefore, integrating proper security components with PRoPHET is extremely important. Our proposed protocol adapts PRoPHET for post disaster group encounter based routing and enhances it by incorporating certain security elements into it to provide full security against possible attacks by malicious nodes in the network. Simulation results show that our proposed protocol, in a disaster scenario, offers better performance in comparison to other well-known routing protocols.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the classical newsboy problem in the light of a cooperative advertising policy in a two-layer supply chain whose members are the manufacturer and the retailer.
Abstract: The paper investigates the classical newsboy problem in the light of a cooperative advertising policy in a two-layer supply chain whose members are the manufacturer and the retailer. As the shelf-life of newsboy products is very short, advertising has immense importance for the newsboy problem. The prime objective of this paper is to introduce an advertising policy that will increase sales and thereby avoid overstocking of products. The advertising policy also helps companies attract a wider range of target customers by highlighting the lucrative characteristics of their product. In this paper, the expected average channel profit is calculated and maximized analytically. Numerical examples are considered to test the model, and one of them is illustrated graphically. Finally, a sensitivity analysis of the model with respect to the key parameters of the system is carried out.
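The paper's advertising-augmented profit model is not reproduced here, but the classical newsboy solution it builds on can be sketched as follows: the optimal order quantity is the critical fractile Q* = F^{-1}(c_u / (c_u + c_o)), where c_u is the underage cost and c_o the overage cost. The function names, the normal demand assumption, and all numbers below are illustrative assumptions.

```python
# Classical newsboy critical-fractile solution (the base model the
# paper's advertising-augmented version extends); numbers are illustrative.
from math import erf, sqrt

def normal_ppf(q, mu, sigma, lo=-1e6, hi=1e6):
    """Inverse CDF of N(mu, sigma^2) by bisection (stdlib only)."""
    cdf = lambda x: 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if cdf(mid) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def newsboy_quantity(price, cost, salvage, mu, sigma):
    """Optimal order quantity Q* = F^{-1}(c_u / (c_u + c_o))."""
    c_u = price - cost      # underage cost: lost margin per unit of unmet demand
    c_o = cost - salvage    # overage cost: loss per leftover unit
    critical_fractile = c_u / (c_u + c_o)
    return normal_ppf(critical_fractile, mu, sigma)

# Symmetric example: c_u = c_o = 4, so Q* equals the mean demand.
q_star = newsboy_quantity(price=10.0, cost=6.0, salvage=2.0,
                          mu=100.0, sigma=20.0)
```

In the paper's setting, advertising effort would shift the demand distribution (mu, sigma) itself, which is what couples the order quantity to the advertising decision.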

Journal ArticleDOI
TL;DR: In 2008, the benefits of team formation by auction first became apparent in the field of professional cricket and the Indian Premier League organized sequential single-item auctions to allocate 77 cri...
Abstract: In 2008, the benefits of team formation by auction first became apparent in the field of professional cricket. The Indian Premier League organized sequential single-item auctions to allocate 77 cri...

Proceedings ArticleDOI
16 Mar 2015
TL;DR: This work presents a scalable CF framework that extends the traditional CF algorithms by incorporating users' context into the recommendation process, using the ratings of the target user as well as the rating history of the other users in that user's cluster.
Abstract: Recommender Systems (RS) are used to provide personalized suggestions for information, products and services that a user has not already used or experienced, but is very likely to prefer. Most existing RS employ variations of Collaborative Filtering (CF) to suggest items relevant to users' interests. However, CF requires similarity computations that grow polynomially with the number of users and items in the database. To handle this scalability problem and speed up the recommendation process, we propose a clustering based recommendation method. The proposed work uses user attributes such as age, gender and occupation as contextual features and partitions the user space on the basis of these attributes. We divide the entire user space into smaller clusters based on context, and then apply the recommendation algorithm separately to each cluster. This reduces the running time of the algorithm, as we avoid computations over the entire data. In this work, we present a scalable CF framework that extends the traditional CF algorithms by incorporating users' context into the recommendation process. While recommending to a target user in a specific cluster, our approach uses the ratings of the target user as well as the rating history of the other users in that cluster. One of the main objectives of our work is to reduce the running time without significantly compromising the recommendation quality. This ensures scalability, allowing us to tackle bigger datasets with the same resources. We have tested our algorithm on the MovieLens dataset; however, our recommendation approach generalizes readily to other domains. Experiments indicate that our method is quite effective in reducing the running time.
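A minimal sketch of the clustering idea, assuming a toy dataset: users are partitioned by a single contextual attribute (an age group here), and user-based CF with cosine similarity then runs only inside the target user's cluster. The user names, ratings and the single-attribute choice are hypothetical; the paper itself works on MovieLens with several attributes.

```python
# Sketch: partition users by a contextual attribute, then run user-based
# CF (cosine similarity) only within the target user's cluster.
from collections import defaultdict
from math import sqrt

ratings = {                       # user -> {item: rating}
    "u1": {"A": 5, "B": 3},
    "u2": {"A": 4, "B": 2, "C": 5},
    "u3": {"A": 1, "C": 2},
}
context = {"u1": "18-24", "u2": "18-24", "u3": "45-54"}   # age group

def clusters(context):
    """Group user ids by their contextual attribute value."""
    groups = defaultdict(list)
    for user, attr in context.items():
        groups[attr].append(user)
    return groups

def cosine(r1, r2):
    """Cosine similarity over the users' rating vectors."""
    common = set(r1) & set(r2)
    if not common:
        return 0.0
    num = sum(r1[i] * r2[i] for i in common)
    den = sqrt(sum(v * v for v in r1.values())) * \
          sqrt(sum(v * v for v in r2.values()))
    return num / den if den else 0.0

def predict(user, item, ratings, context):
    """Similarity-weighted average over raters inside the user's cluster."""
    cluster = clusters(context)[context[user]]
    num = den = 0.0
    for other in cluster:
        if other != user and item in ratings[other]:
            sim = cosine(ratings[user], ratings[other])
            num += sim * ratings[other][item]
            den += abs(sim)
    return num / den if den else None

score = predict("u1", "C", ratings, context)  # only u2 is consulted, not u3
```

Because similarity is computed only against cluster members, the per-query cost scales with cluster size rather than the full user base, which is the running-time saving the abstract describes.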

Book ChapterDOI
01 Jan 2015
TL;DR: This paper presents an acceleration based gesture recognition approach using a wearable MEMS tri-axial accelerometer, and introduces a frame-based lookup table for gesture recognition that not only reduces hardware complexity but also minimizes the power consumed by the associated circuitry.
Abstract: This paper presents an acceleration based gesture recognition approach with a wearable MEMS tri-axial accelerometer. In the application model, we have introduced a frame based lookup table for gesture recognition. In accelerometer based gesture recognition, calibration of the sensor data plays an important role, owing to the erroneous output caused by zero-G error. In this work, a six-point calibration of the sensor data is presented. The calibrated acceleration data obtained from the sensor are represented in the form of a frame-based signifier to extract discriminative gesture information. This procedure is advantageous over conventional video image processing based gesture recognition, which relies on cameras and computationally heavy algorithms. Thus, accelerometer based gesture recognition not only reduces hardware complexity but also minimizes the power consumed by the associated circuitry. Finally, this study helps us develop a real time implementation of a wearable gesture recognition device.
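A common form of six-point calibration, consistent with the description above, derives each axis's zero-G offset and sensitivity from static +1 g and -1 g readings (six orientations in total, two per axis). The raw counts below are hypothetical values for a generic tri-axial MEMS part, not measurements from this work.

```python
# Sketch of per-axis six-point calibration: each axis is held at +1 g
# and -1 g in turn; offset and sensitivity follow from the two readings.

def six_point_calibrate(raw_plus_1g, raw_minus_1g):
    """Return per-axis (offset, sensitivity) from +-1 g static readings."""
    params = []
    for p, m in zip(raw_plus_1g, raw_minus_1g):
        offset = (p + m) / 2.0          # zero-G level, in raw counts
        sensitivity = (p - m) / 2.0     # counts per g
        params.append((offset, sensitivity))
    return params

def apply_calibration(raw, params):
    """Convert a raw (x, y, z) sample to acceleration in units of g."""
    return tuple((r - off) / sens for r, (off, sens) in zip(raw, params))

# Hypothetical static readings at +1 g and -1 g along each axis.
params = six_point_calibrate(raw_plus_1g=(530, 525, 540),
                             raw_minus_1g=(-490, -495, -480))
sample = apply_calibration((20, 15, 540), params)  # device flat, z-axis up
```

Once samples are in calibrated g units, they can be segmented into the fixed-length frames that feed the paper's frame-based lookup table.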

Proceedings ArticleDOI
24 Sep 2015
TL;DR: Simulation results show that the proposed approach can track the Maximum Power Point (MPP) of a solar cell module with good precision under different levels of solar irradiance and environmental temperature.
Abstract: The main objective of this paper is to develop an intelligent and efficient Maximum Power Point Tracking (MPPT) technique. Two recently introduced and popular swarm intelligence based algorithms, the Firefly Algorithm (FA) and Artificial Bee Colony (ABC), have been used in this study to develop a novel technique to track the Maximum Power Point (MPP) of a solar cell module. The performance of the two algorithms in this context has been compared with other popular evolutionary computing techniques such as PSO, DE and GA. Simulations were carried out in the MATLAB/SIMULINK environment, and the results show that the proposed approach can locate the MPP with good precision under different levels of solar irradiance and environmental temperature.
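As an illustration of how a swarm method can serve as an MPPT search, here is a minimal Firefly Algorithm sketch over a stand-in power-voltage curve. The quadratic curve (MPP placed at 17.5 V / 60 W), the FA parameters and the voltage bounds are all placeholder assumptions; the paper's actual PV model and MATLAB/SIMULINK setup are not reproduced here.

```python
# Sketch: Firefly Algorithm searching for the module voltage that
# maximizes a placeholder P-V curve. Brightness = output power.
import random
from math import exp

def pv_power(v):
    """Placeholder unimodal P-V curve with its MPP at 17.5 V / 60 W."""
    return 60.0 - 0.5 * (v - 17.5) ** 2

def firefly_mppt(n=15, iters=60, v_min=0.0, v_max=22.0,
                 beta0=1.0, gamma=0.01, alpha=0.4, seed=1):
    rng = random.Random(seed)
    pos = [rng.uniform(v_min, v_max) for _ in range(n)]
    best_v = max(pos, key=pv_power)
    for t in range(iters):
        bright = [pv_power(v) for v in pos]
        for i in range(n):
            for j in range(n):
                if bright[j] > bright[i]:        # i moves toward brighter j
                    beta = beta0 * exp(-gamma * (pos[i] - pos[j]) ** 2)
                    step = alpha * (1.0 - t / iters) * (rng.random() - 0.5)
                    pos[i] += beta * (pos[j] - pos[i]) + step
                    pos[i] = min(max(pos[i], v_min), v_max)  # clamp to range
        cand = max(pos, key=pv_power)
        if pv_power(cand) > pv_power(best_v):    # keep the best-so-far point
            best_v = cand
    return best_v, pv_power(best_v)

v_mpp, p_mpp = firefly_mppt()
```

In an actual MPPT loop, `pv_power` would be replaced by a measurement of the module's output at the commanded operating point, so each brightness evaluation costs one perturbation of the converter.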

Proceedings ArticleDOI
20 Apr 2015
TL;DR: A location based mobility prediction scheme that helps select the appropriate forwarder by approximating the periodicity of DTN node mobility and using that knowledge to predict the mobility pattern of nodes and facilitate forwarding.
Abstract: Selection of the next hop forwarder plays a pivotal role in the timely and accurate dissemination of post disaster situational data to a predetermined destination. Prior knowledge about the probability of a node's future presence near the destination eases this process significantly. In this paper, we propose a location based mobility prediction scheme that helps select the appropriate forwarder by predicting the mobility pattern of nodes. Researchers have, over a considerable period, proposed the use of DTN for setting up a post disaster communication network. DTN specific user mobility involves both periodic and slightly chaotic patterns; the chaotic behavior is attributed to sudden causal events triggering instantaneous node mobility. In our approach, we approximate the periodicity of DTN node mobility and use that knowledge to facilitate forwarding. Each mobile node is expected to periodically sample its own position as a time-location pair. This information is shared with other nodes upon contact. The sampled data from other nodes are extrapolated to future time instances to predict the possible locations of the mobile nodes at the next few points in time. The node whose predicted position lies closest to the vicinity of the destination at some future time then qualifies as the next forwarder. We compare the predictions with the real locations of the nodes at those future time instances, and simulation results show that our scheme predicts node mobility satisfactorily.
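The prediction-and-selection step described above can be sketched as follows, assuming simple linear extrapolation from the last two time-location samples (the paper's actual extrapolation over periodic mobility may differ); the node traces and the destination are hypothetical.

```python
# Sketch: extrapolate each candidate's (time, x, y) trace to a future
# instant and pick the node predicted to end up nearest the destination.
from math import hypot

def predict_position(samples, t_future):
    """Linear extrapolation from the last two (t, x, y) samples."""
    (t1, x1, y1), (t2, x2, y2) = samples[-2], samples[-1]
    if t2 == t1:
        return x2, y2
    vx = (x2 - x1) / (t2 - t1)
    vy = (y2 - y1) / (t2 - t1)
    dt = t_future - t2
    return x2 + vx * dt, y2 + vy * dt

def pick_forwarder(traces, destination, t_future):
    """Return the node whose predicted position is closest to destination."""
    def dist(node):
        x, y = predict_position(traces[node], t_future)
        return hypot(x - destination[0], y - destination[1])
    return min(traces, key=dist)

traces = {
    "n1": [(0, 0.0, 0.0), (10, 5.0, 0.0)],    # heading toward (15, 0)
    "n2": [(0, 0.0, 10.0), (10, 0.0, 12.0)],  # drifting away from it
}
forwarder = pick_forwarder(traces, destination=(15.0, 0.0), t_future=30)
```

A real deployment would extrapolate over several sampled points and fold in the approximated periodicity, but the forwarder-selection criterion (minimal predicted distance to the destination) is the same.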