
Showing papers in "International Journal of Systems Assurance Engineering and Management in 2020"


Journal ArticleDOI
TL;DR: This article reviews existing neural network techniques used to process image data, with an emphasis on detecting crop diseases, so that future research can exploit the broader capabilities of deep learning for plant disease detection while improving system performance and accuracy.
Abstract: Automatic identification of diseases through hyperspectral images is a critical and primary challenge for sustainable farming and has gained the attention of researchers during the past few years. The technologies proposed and techniques adopted so far are limited in scope and heavily dependent on deep learning models. Convolutional neural networks are emerging as the most powerful tool to diagnose and predict infections from crop images. The present article reviews some of the existing neural network techniques used to process image data, with an emphasis on detecting crop diseases. First, a review of data acquisition sources, deep learning models/architectures, and the different image processing techniques used to process the imaging data is provided. Second, the study highlights the results acquired from the evaluation of various existing deep learning models, and finally it outlines the future scope for hyperspectral data analysis. This survey is intended to allow future research to exploit the broader capabilities of deep learning for detecting plant diseases while improving system performance and accuracy.
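To make the kind of model this review surveys concrete, here is a minimal CNN classifier sketch; all shapes, band counts, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a CNN crop-disease classifier (illustrative assumptions only).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_BANDS = 32    # hypothetical number of hyperspectral bands
NUM_CLASSES = 5   # hypothetical number of disease classes

model = models.Sequential([
    layers.Input(shape=(64, 64, NUM_BANDS)),         # 64x64 image patches
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # one probability per class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```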

83 citations


Journal ArticleDOI
TL;DR: The article provides seven fully worked examples, with screenshots of output summaries from the software used in the computations, for better understanding; some managerial applications and the advantages of the proposed approach are also given.
Abstract: In this article, the crisp, fuzzy and intuitionistic fuzzy optimization problem is formulated. The basic definitions and notations related to optimization problems are given in the preliminaries section. Algorithms for solving optimization problems using fuzzy and intuitionistic fuzzy sets are presented. With the help of the proposed algorithms, the optimal solutions of the crisp, fuzzy and intuitionistic fuzzy optimization problems are determined. A new theorem related to type-2 fuzzy/type-2 intuitionistic fuzzy optimization problems is proposed and proved, and some new, concrete results for such problems are presented. To illustrate the proposed method, several real-life numerical examples are given; the article provides seven fully worked examples with screenshots of the output summaries from the software used in the computations. A detailed comparative study, a superiority analysis, and a discussion of the advantages of the proposed approach and operators over existing work are included, along with some managerial applications. Finally, conclusions and future research directions are given.
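For readers unfamiliar with how fuzzy and intuitionistic fuzzy quantities are compared when solving such problems, here is a minimal sketch; the centroid defuzzification and score function below are common textbook choices, not necessarily the paper's own constructions.

```python
# Ranking fuzzy objective values: centroid defuzzification for triangular fuzzy
# numbers and the standard score function for intuitionistic fuzzy values.
from dataclasses import dataclass

@dataclass
class TriangularFuzzyNumber:
    a: float; b: float; c: float          # lower, modal, upper values
    def defuzzify(self) -> float:
        return (self.a + self.b + self.c) / 3.0   # centroid of the triangle

@dataclass
class IntuitionisticFuzzyValue:
    mu: float   # membership degree
    nu: float   # non-membership degree (mu + nu <= 1)
    def score(self) -> float:
        return self.mu - self.nu                  # common score function

# Rank candidate solutions by their defuzzified objective values.
costs = [TriangularFuzzyNumber(2, 3, 5), TriangularFuzzyNumber(1, 4, 6)]
print(min(costs, key=TriangularFuzzyNumber.defuzzify))
print(IntuitionisticFuzzyValue(0.7, 0.2).score())   # 0.5
```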

51 citations


Journal ArticleDOI
TL;DR: This study identifies nine significant strategic thinking enablers and develops an integrated model using modified TISM and the MICMAC approach, which is useful for classifying and identifying the critical strategic thinking enablers and shows the indirect and direct consequences of all strategic thinking enablers on the implementation of strategic thinking.
Abstract: The purpose of the present paper is to investigate the enablers of strategic thinking and establish the relationships among them by applying modified total interpretive structural modeling (TISM) and matrices impacts croises multiplication appliquer classement (MICMAC) analysis. The paper also analyzes the enablers' dependence and driving power. This study has identified nine significant strategic thinking enablers and developed an integrated model using modified TISM and the MICMAC approach, which is useful for classifying and identifying the critical strategic thinking enablers and shows the indirect and direct consequences of all enablers on the implementation of strategic thinking. The weightings for the modified TISM and MICMAC analysis were obtained from the opinions of experts from industry and academia. This research has substantial practical consequences for both academicians and practitioners. Senior managers can take strategic decisions for implementing the strategic thinking enablers obtained by modified TISM and MICMAC analysis, and practitioners need to emphasize these enablers in organizational practices. This study is the first of its kind to identify nine strategic thinking enablers and apply modified TISM and MICMAC to classify and identify the critical enablers that influence strategic thinking implementation in the organization. This paper would help organizations to evaluate and challenge their current management systems and promote new enablers for better implementation of strategic thinking.
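The MICMAC classification step is mechanical once the reachability matrix is built: driving power is a row sum and dependence power a column sum. A sketch with a made-up 4x4 matrix (the paper uses nine enablers whose data are not given here):

```python
# MICMAC sketch: classify enablers by driving power and dependence power.
import numpy as np

R = np.array([[1, 1, 0, 1],    # final reachability matrix (illustrative)
              [0, 1, 0, 1],
              [1, 1, 1, 1],
              [0, 0, 0, 1]])

driving = R.sum(axis=1)      # how many enablers each one influences
dependence = R.sum(axis=0)   # how many enablers influence each one

for i, (d, p) in enumerate(zip(driving, dependence)):
    quadrant = ("autonomous" if d <= 2 and p <= 2 else
                "dependent" if d <= 2 else
                "independent (driver)" if p <= 2 else "linkage")
    print(f"enabler {i}: driving={d}, dependence={p}, {quadrant}")
```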

38 citations


Journal ArticleDOI
TL;DR: Stakeholders will need to work on four areas to ensure successful mass adoption of AVs and decrease the chance of failure; the identified factors can be sorted into four readiness categories: technology (vehicle technology, safety and ethics), infrastructure, legal, and user acceptance.
Abstract: Autonomous vehicles (AVs) are the latest trend in the automobile industry. Although the concept has existed since the beginning of the last century, recent technological advancements have enabled the industry to attempt mass introduction of driverless vehicles. This paper reviews the literature to explore the factors that influence the successful adoption of AVs. Previous literature in the field was gathered and analyzed using a categorization method to identify factors that are essential to the successful adoption of AVs. A total of 14 factors were identified from 85 articles published in outlets such as Transportation Research Part A: Policy and Practice, the RAND Corporation, and the Yale Journal of Law and Technology. These factors can be sorted into four readiness categories: technology (vehicle technology, safety and ethics), infrastructure (communication, technology of roads and traffic signs, and cost of infrastructure), legal (liability, privacy and cybersecurity) and user acceptance (consumer acceptance, marketing and advertising, cost of AVs, and trust). Stakeholders will need to work on these four areas to ensure successful mass adoption of AVs and decrease the chance of failure. The paper also recommends future research on these four categories and their 14 success factors to provide additional information on how governments can support the adoption of autonomous driving.

37 citations


Journal ArticleDOI
TL;DR: The findings indicate that EWOM, WQ, and PS are positively associated with ECS as well as RI, and that e-tailers need to pay attention to the S&H options on their websites rather than relying blindly on WQ and the information quality of EWOM.
Abstract: The notions of website quality (WQ) and EWOM communications have received considerable attention in both online businesses and research communities. However, product satisfaction (PS) and shipping and handling (S&H) have not yet been adequately investigated in existing studies of e-shopping. By integrating WQ, EWOM, and PS, we developed a path model for analyzing their effect on electronic commerce customer satisfaction (ECS) and repurchase intention (RI). We then analyze the moderating role played by S&H, together with EWOM and WQ, on ECS. The findings indicate that EWOM, WQ, and PS are positively associated with ECS as well as RI. We further note that e-tailers need to pay attention to the S&H options on their websites rather than relying blindly on WQ and the information quality of EWOM. This study helps marketers to understand the key roles played by WQ and EWOM, along with S&H, in ECS levels.
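A moderation effect of the kind tested here is usually assessed with an interaction term in a regression; a hypothetical sketch (variable names reuse the abstract's abbreviations, the data are simulated, and the paper's actual path model may differ):

```python
# Illustrative moderation test: does S&H moderate the effect of WQ on ECS?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({"WQ": rng.normal(size=n), "SH": rng.normal(size=n)})
df["ECS"] = 0.5 * df.WQ + 0.3 * df.SH + 0.2 * df.WQ * df.SH + rng.normal(size=n)

# A significant WQ:SH coefficient indicates moderation of WQ -> ECS by S&H.
model = smf.ols("ECS ~ WQ * SH", data=df).fit()
print(model.summary().tables[1])
```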

36 citations


Journal ArticleDOI
TL;DR: An integrated framework based on both the 'Technology–Organization–Environment (TOE) framework' and 'Diffusion of Innovation (DoI) theory' is described, and the results confirm that the TOE–DoI approach is valid for higher education in Ethiopia.
Abstract: This research paper describes an integrated framework based on both the 'Technology–Organization–Environment (TOE) framework' and 'Diffusion of Innovation (DoI) theory'. The study explores the noteworthy factors and sub-factors that are pertinent to the adoption of cloud computing in the Ethiopian Higher Education (EHE) sector. Technology adoption frameworks and theories from the literature were studied in order to identify a set of factors and sub-factors relevant to cloud computing adoption. This resulted in conceptualizing an integrated TOE–DoI framework for cloud computing adoption in higher education at universities in Ethiopia and in developing reliable measures for it. Accordingly, a quantitative study was conducted with a questionnaire survey comprising 500 respondents and covering 4 factors (technological, organizational, environmental and socio-cultural). Consequently, cloud computing adoption in Ethiopia was assessed using the factors and concepts adopted in the study, confirming that the TOE–DoI approach is valid for higher education in Ethiopia. The reliability statistics for the four factors were validated with Cronbach's alpha values of α = 0.739, 0.712, 0.761 and 0.841, and Cronbach's alpha based on standardized items of α = 0.740, 0.713, 0.762 and 0.842, for the technological, organizational, environmental, and socio-cultural factors respectively. This indicates that the scales for the four factors provide strong evidence for determining cloud computing adoption in EHE with TOE–DoI integration.
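The reliability check quoted above is Cronbach's alpha; a minimal sketch of the computation (the survey items below are simulated, not the study's data):

```python
# Cronbach's alpha for one factor's survey items.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 1))                         # 500 respondents
responses = latent + rng.normal(scale=1.0, size=(500, 5))  # 5 correlated items
print(f"alpha = {cronbach_alpha(responses):.3f}")
```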

35 citations


Journal ArticleDOI
TL;DR: A trust-based model to detect rogue nodes in a vehicular network, which selects only trustworthy nodes to relay data in the routing process and enhances network performance significantly.
Abstract: Due to the exponential growth of the automobile industry, we need intelligent transportation systems. A vehicular ad-hoc network (VANET), a part of the intelligent transportation system, is the network created by vehicles. Security is the main issue in vehicular ad-hoc networks, and many intruders try to exploit the vulnerabilities present in them. In a VANET, communication between two nodes may involve multiple intermediate nodes to forward the data because of the low transmission range. The intermediate nodes must be trustworthy enough to be part of the communication process: rogue or malicious nodes can accept data and then drop it between source and destination. In this paper, we propose a trust-based model to detect rogue nodes in a vehicular network. The proposed model first estimates the trust values of the nodes and, based on these, identifies the rogue nodes in the network. Only trustworthy nodes are selected to relay data in the routing process. The simulation and performance evaluation of the proposed model were performed with the network simulator NS-2. We evaluate the performance of the network using four metrics: successful packet delivery fraction, throughput, routing load, and end-to-end delay. The simulation results show that the proposed model enhances network performance significantly.
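A minimal sketch of the trust idea: rate neighbours by the fraction of packets they actually forward, and exclude those below a threshold. The update rule and threshold below are illustrative assumptions, not the paper's exact model.

```python
# Toy trust table for relay selection in a VANET-style network.
class TrustTable:
    def __init__(self, threshold: float = 0.5):
        self.records = {}          # node_id -> [forwarded, total]
        self.threshold = threshold

    def observe(self, node_id: str, forwarded: bool):
        rec = self.records.setdefault(node_id, [0, 0])
        rec[0] += int(forwarded)
        rec[1] += 1

    def trust(self, node_id: str) -> float:
        f, t = self.records.get(node_id, (0, 0))
        return (f + 1) / (t + 2)   # Laplace-smoothed forwarding ratio

    def relay_candidates(self, neighbours):
        return [n for n in neighbours if self.trust(n) >= self.threshold]

table = TrustTable()
for _ in range(20): table.observe("v42", forwarded=True)
for _ in range(20): table.observe("v13", forwarded=False)   # rogue: drops packets
print(table.relay_candidates(["v42", "v13"]))               # -> ['v42']
```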

30 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a novel blockchain model that governments can adopt to establish a government-led blockchain ecosystem for government services, based on their analysis of a permissioned blockchain platform and of a blockchain-powered housing rentals use case being implemented by the Dubai Government.
Abstract: Blockchain, a distributed ledger technology, is a disruptive and revolutionary technology that enables transacting data in a decentralized structure without the need for trusted central authorities. Many industries have taken steps to unleash the potential of blockchain technology, including the government sector, and early adopters are exploring the use of blockchain as one of the critical capabilities needed to create new business models and to radically transform government services and functions. However, despite this interest in adopting blockchain technology, the literature lacks blockchain frameworks or reference models/architectures that address government services. In this paper, we propose a novel blockchain model that governments can adopt to establish a government-led blockchain ecosystem for government services. The model is based on the outcomes of our analysis of a permissioned blockchain platform and of a blockchain-powered housing rentals use case that is being implemented by the Dubai Government. The proposed model includes a blockchain governance structure, definitions of participants and roles, and a network architecture design that explains the deployment options and components. We also explain the model lifecycle and blockchain services, and we shed light on the security and performance of the model. The study also explores many blockchain use cases pursued by governments through proofs of concept or prototypes. Our analysis of Hyperledger Fabric's design features shows the platform's relevance to government services and use cases, and it identifies the platform's actors, services, processes, and data structures.
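To make the ledger concept concrete, here is a toy hash-chained block structure in the spirit of a permissioned ledger. This is a didactic sketch only; it is not the Hyperledger Fabric API or the Dubai Government implementation, and the transaction fields are invented.

```python
# Toy hash-linked ledger illustrating tamper evidence.
import hashlib, json, time

def make_block(transactions, prev_hash: str) -> dict:
    header = {"timestamp": time.time(), "prev_hash": prev_hash,
              "tx": transactions}
    block_hash = hashlib.sha256(
        json.dumps(header, sort_keys=True).encode()).hexdigest()
    return {"header": header, "hash": block_hash}

genesis = make_block([{"action": "create_rental_registry"}], "0" * 64)
block1 = make_block([{"action": "register_lease", "property": "unit-7"}],
                    genesis["hash"])
# Tampering with genesis would change its hash and break the link:
assert block1["header"]["prev_hash"] == genesis["hash"]
```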

30 citations


Journal ArticleDOI
TL;DR: An ANN model to predict metal removal rate (MRR) and surface roughness (Ra) values for machining AISI 1045 steel reveals that the (3-5-1) architecture is the best for predicting Ra and MRR, with about 98.136% and 97.3% accuracy respectively.
Abstract: The wire electrical discharge machining (WEDM) process is used in a wide spectrum of industrial applications. AISI 1045 is a medium-carbon steel that, because of its excellent physical and chemical properties, is used in many applications. However, a review of the state-of-the-art literature reveals a lack of research on optimizing the WEDM process for machining AISI 1045 steel. The objectives of this research are to build an ANN model to predict metal removal rate (MRR) and surface roughness (Ra) values for machining AISI 1045 steel, to identify the significance of pulse on-time (TON), pulse off-time (TOFF) and servo feed (SF) for MRR and Ra, and to select the optimal machining parameters that give the maximum MRR value and the minimum Ra value. The Taguchi method (design of experiments), artificial neural networks (ANN), and analysis of variance (ANOVA) were used as the methodology to fulfill the research objectives. This research reveals that the (3-5-1) ANN architecture is the best for predicting Ra and MRR, with about 98.136% and 97.3% accuracy respectively. TON is the most significant cutting parameter affecting Ra, with a P% value of 42.922%, followed by TOFF with a P% value of 24.860%. SF was not a significant parameter for Ra because Fα > F. For MRR, the most significant parameter is TON with a P% value of 71.733%, about three times the TOFF P% value (21.796%), while the SF parameter has a small influence with a P% value of 3.02%. The analysis confirmed that the optimal cutting parameters for maximum MRR were: TON at the third level (25 µs), TOFF at the first level (20 µs), and SF at the third level (700 mm/min). On the other hand, the optimal cutting parameters for minimum Ra were: TON at the first level (10 µs), TOFF at the third level (40 µs), and SF at the first level (500 mm/min). Future work may focus on optimizing the WEDM process for machining other types of materials or other sets of parameters and performance measures.
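The reported best architecture (3-5-1) maps directly to a small MLP: three inputs (TON, TOFF, SF), one hidden layer of five neurons, and one output (Ra or MRR). A sketch under that reading; the activation, optimizer, and training data below are placeholder assumptions.

```python
# 3-5-1 MLP sketch for predicting Ra (or MRR) from WEDM parameters.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(3,)),            # TON, TOFF, SF
    layers.Dense(5, activation="tanh"),  # hidden layer of 5 neurons
    layers.Dense(1),                     # predicted Ra (or MRR)
])
model.compile(optimizer="adam", loss="mse")

# Placeholder experiment matrix: [TON (µs), TOFF (µs), SF (mm/min)].
# In practice the inputs would be normalized before training.
X = np.array([[10, 20, 500], [25, 40, 700], [17, 30, 600]], dtype=float)
y = np.array([[1.8], [3.2], [2.4]])      # hypothetical Ra values (µm)
model.fit(X, y, epochs=200, verbose=0)
```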

28 citations


Journal ArticleDOI
TL;DR: A systematic review of the literature on liability with AVs concludes that liability would depend on the particular details of a situation, the role of the driver, the level of autonomy exercised by the vehicle, and the environmental factors among other considerations.
Abstract: The introduction of autonomous vehicles (AVs) has led to a shift in liability from driver to AV. This paper is a systematic review of the literature on liability with AVs. Articles were selected from EBSCO, SCOPUS and PROQUEST and then sorted based on inclusion and exclusion criteria. The findings reveal that most such articles have been published in law journals and transport journals. The articles mainly discuss the steps being taken by developed countries, such as the US, the UK and Germany; the modification of existing laws; and the formulation of new laws. The articles also address the shift in liability from humans to AVs and affirm that there is no general rule on liability for AVs. Researchers conclude that liability would depend on the particular details of a situation, the role of the driver, the level of autonomy exercised by the vehicle, and the environmental factors among other considerations. Earlier papers have also suggested that ethical considerations should be studied when dealing with liability. Further research needs to be done on how users perceive liability and how liability evolves with the considerable changes in the law.

27 citations


Journal ArticleDOI
TL;DR: WOA_SVM is developed to automate a computer-aided diagnosis system that determines whether a lung CT image is normal or abnormal, using the whale optimization algorithm for optimal feature selection to obtain accurate results and construct a robust model.
Abstract: Medical image processing techniques are widely used for tumor detection to increase the survival rate of patients. The development of computer-aided diagnosis systems shows improvement in observing medical images and determining the stages of treatment. Earlier detection of a tumor reduces the mortality of lung cancer by increasing the probability of successful treatment. In this paper, an intelligent lung tumor diagnosis system is developed using various image processing techniques. The steps involve image enhancement, image segmentation, post-processing, feature extraction, feature selection and classification using support vector machine (SVM) kernels. The gray level co-occurrence matrix method is used to extract 19 texture and statistical features from lung computed tomography (CT) images. The whale optimization algorithm (WOA) is used to select the best prominent feature subset. The contribution of this paper is the development of WOA_SVM to automate the aided diagnosis system for determining whether a lung CT image is normal or abnormal. An improved technique is developed using the whale optimization algorithm for optimal feature selection to obtain accurate results and construct a robust model. The performance of the proposed methodology is evaluated using accuracy, sensitivity and specificity, obtaining 95%, 100% and 92% respectively with a radial basis function support vector kernel.
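The pipeline can be sketched as: GLCM-style features, a feature-subset search, and an RBF SVM scored by cross-validation. In the sketch below a simple random wrapper search stands in for the whale optimization algorithm, and the data and 19-feature layout are simulated, not the paper's.

```python
# Feature-selection wrapper around an RBF SVM (random search standing in for WOA).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 19))                 # 19 texture/statistical features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # normal vs abnormal (synthetic)

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = SVC(kernel="rbf")                    # radial basis function kernel
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

best_mask, best_fit = None, -1.0
for _ in range(50):                            # WOA would guide this search
    mask = rng.random(19) < 0.5
    f = fitness(mask)
    if f > best_fit:
        best_mask, best_fit = mask, f
print(f"selected {best_mask.sum()} features, CV accuracy {best_fit:.2f}")
```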

Journal ArticleDOI
TL;DR: The results show that the proposed ELF model attains better generalization and outperforms existing load forecasting models based on a shallow neural network, an ensemble tree bagger, and generalized linear regression.
Abstract: Recently, the smart grid has emerged as a promising technology to facilitate the future electric power grid and to balance supply and demand. However, the intermittent nature of distributed energy resources causes dynamic uncertainties and nonlinearity in the smart grid environment. This can place a large stress on the power grid and has a big influence on energy planning, especially generation and distribution. Therefore, energy load forecasting plays an important role in facilitating the operation of the future smart grid. Traditional statistical and machine learning approaches exhibit significant forecasting error and a high degree of overfitting. In this paper, we propose an energy load forecasting (ELF) model based on deep neural network architectures to manage energy consumption in smart grids. First, we investigate the applicability of two deep neural network architectures: the deep feed-forward neural network (deep-FNN) and the deep recurrent neural network (deep-RNN). To evaluate the models with low error, we simulate both architectures with training sets of multiple sizes. Further, various activation functions and different combinations of hidden layer architectures are also tested. The simulation results are compared in terms of mean absolute percentage error. The results show that the proposed ELF model attains better generalization and outperforms existing load forecasting models based on a shallow neural network, an ensemble tree bagger, and generalized linear regression.
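The comparison described above can be sketched as two small models scored by MAPE on a load series; the layer sizes, window length, and synthetic daily-cycle data below are illustrative assumptions, not the paper's configuration.

```python
# deep-FNN vs deep-RNN load-forecasting sketch, compared by MAPE.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def mape(y_true, y_pred):
    return 100 * np.mean(np.abs((y_true - y_pred) / y_true))

# Synthetic daily-cycle load: 24 lagged values -> next-hour load.
t = np.arange(3000, dtype=float)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) \
       + np.random.default_rng(0).normal(0, 2, t.size)
X = np.stack([load[i:i + 24] for i in range(len(load) - 24)])
y = load[24:]

fnn = models.Sequential([layers.Input(shape=(24,)),
                         layers.Dense(64, activation="relu"),
                         layers.Dense(64, activation="relu"),
                         layers.Dense(1)])
rnn = models.Sequential([layers.Input(shape=(24, 1)),
                         layers.SimpleRNN(32),
                         layers.Dense(1)])
for name, m, Xm in [("deep-FNN", fnn, X), ("deep-RNN", rnn, X[..., None])]:
    m.compile(optimizer="adam", loss="mse")
    m.fit(Xm[:2500], y[:2500], epochs=5, verbose=0)
    pred = m.predict(Xm[2500:], verbose=0).ravel()
    print(name, "MAPE:", round(mape(y[2500:], pred), 2))
```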

Journal ArticleDOI
TL;DR: A simple and novel algorithm is presented that uses the artificial bee colony algorithm in the field of data flow testing to find and prioritize the definition-use paths which are not definition-clear paths.
Abstract: The meta-heuristic Artificial Bee Colony Algorithm finds its applications in the optimization of numerical problems. The intelligent searching behaviour of honey bees forms the basis of this algorithm. The Artificial Bee Colony Algorithm performs a global search along with a local search. One of its major usage areas is software testing, such as structural testing and test suite optimization. The implementation of the Artificial Bee Colony Algorithm in the field of data flow testing is still unexplored. In data flow testing, the definition-use paths which are not definition-clear paths are the potential trouble spots. The main aim of this paper is to present a simple and novel algorithm that uses the artificial bee colony algorithm in the field of data flow testing to find and prioritize the definition-use paths which are not definition-clear paths.
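The search structure the abstract alludes to is the standard ABC loop: employed bees refine known solutions, onlookers favour fitter ones, and scouts replace exhausted sources. A generic skeleton follows; in the paper's setting the objective would score candidate test paths, whereas the continuous toy objective here is only a stand-in.

```python
# Generic artificial bee colony skeleton (toy continuous objective).
import numpy as np

rng = np.random.default_rng(0)
DIM, SOURCES, LIMIT, ITERS = 5, 10, 20, 200
f = lambda x: np.sum(x ** 2)                 # stand-in objective to minimize

X = rng.uniform(-5, 5, (SOURCES, DIM))       # food sources (solutions)
trials = np.zeros(SOURCES)

def neighbour(i):
    k, j = rng.integers(SOURCES), rng.integers(DIM)
    v = X[i].copy()
    v[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
    return v

for _ in range(ITERS):
    for i in range(SOURCES):                 # employed-bee phase
        v = neighbour(i)
        if f(v) < f(X[i]): X[i], trials[i] = v, 0
        else: trials[i] += 1
    fit = 1 / (1 + np.array([f(x) for x in X]))
    probs = fit / fit.sum()
    for _ in range(SOURCES):                 # onlooker-bee phase
        i = rng.choice(SOURCES, p=probs)
        v = neighbour(i)
        if f(v) < f(X[i]): X[i], trials[i] = v, 0
        else: trials[i] += 1
    for i in range(SOURCES):                 # scout-bee phase
        if trials[i] > LIMIT:
            X[i], trials[i] = rng.uniform(-5, 5, DIM), 0

print("best value:", min(f(x) for x in X))
```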

Journal ArticleDOI
TL;DR: The main objective of the paper is to develop a production inventory model under these assumptions, considering various cost parameters as interval numbers; a quantum-behaved particle swarm optimization technique is applied to find the maximum profit in a single cycle.
Abstract: The aim of this paper is to develop a production inventory model for real-life situations. In reality, the demand for a product is strongly affected by the on-hand stock level in stores and the selling price of the product; in the proposed model, demand depends on these two decisive factors. Real-life production systems always yield some defective products that require reworking in order to make them useful. The possibility of producing some defective items in the regular production process, and their reworking, has been taken into account in the model. For inventories of products in high demand, it is observed that the production rate is proportional to demand; this situation also arises when launching a new product or in a multi-stage production system. In this model, the production rate is a variable that varies with the demand rate. Shortages are allowed and are fully backlogged. Several researchers have worked under these assumptions, but they considered the associated inventory cost parameters to be fixed real numbers. However, these are not fixed in reality and may vary from time to time depending on the circumstances. The main objective of the paper is to develop a production inventory model under these assumptions while treating the various cost parameters as interval numbers. As a result, the corresponding optimization problem is also interval valued. A quantum-behaved particle swarm optimization technique is applied to find the maximum profit in a single cycle. A numerical example and a sensitivity analysis are given to illustrate the proposed inventory model.
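The core update of quantum-behaved PSO (QPSO) can be sketched as follows; the actual interval-valued inventory objective would replace the toy `profit` function, and the parameters are common defaults rather than the paper's settings.

```python
# QPSO core loop on a stand-in profit function (to be maximized).
import numpy as np

rng = np.random.default_rng(0)
N, DIM, ITERS, BETA = 30, 2, 300, 0.75
profit = lambda x: -(x[0] - 3) ** 2 - (x[1] - 5) ** 2   # toy surrogate

X = rng.uniform(0, 10, (N, DIM))
pbest = X.copy()
gbest = max(X, key=profit).copy()

for _ in range(ITERS):
    mbest = pbest.mean(axis=0)                  # mean of personal bests
    for i in range(N):
        phi, u = rng.random(DIM), rng.random(DIM)
        p = phi * pbest[i] + (1 - phi) * gbest  # local attractor
        sign = np.where(rng.random(DIM) < 0.5, 1, -1)
        X[i] = p + sign * BETA * np.abs(mbest - X[i]) * np.log(1 / u)
        if profit(X[i]) > profit(pbest[i]): pbest[i] = X[i].copy()
        if profit(X[i]) > profit(gbest): gbest = X[i].copy()

print("best decision:", gbest, "profit:", profit(gbest))
```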

Journal ArticleDOI
TL;DR: The paper analyzes the use of different AI techniques such as fuzzy logic, artificial neural networks, particle swarm optimization and fuzzy-neural systems in critical health scenarios, and assesses the relevance of the various AI techniques in each scenario.
Abstract: The role of a healthcare practitioner is to diagnose a disease and find an optimal means of suitable treatment, which has been a most challenging task over the years. Researchers have been developing intelligent tools to support medical decision making. The application of AI in different health scenarios strengthens the mechanisms for finding better treatment plans. The authors share some recent advancements in this domain, and the role of artificial intelligence in the Indian healthcare system is also discussed. The paper analyzes the use of different AI techniques such as fuzzy logic, artificial neural networks, particle swarm optimization and fuzzy-neural systems in critical health scenarios, and a detailed literature review is provided in this context. The diseases taken into consideration are cancer, TB, diabetes, malaria, orthopedic conditions, obesity and diseases specific to elderly people. The purpose of this article is to assess the relevance of various AI techniques in different critical health scenarios. A comparative analysis is performed based on publications since 1995. The challenges and risks associated with the use of AI in healthcare have been analysed, and suggestions are made for making analysis in the health domain more accurate and effective. Further, the concept of deep learning is explained and its integration with the medical domain is discussed.

Journal ArticleDOI
TL;DR: The present study suggests an integrated approach for determining the optimal release policy for the Mula reservoir situated at the Godavari basin, India using a hybridization of Dynamic Programming and Particle Swarm Optimization.
Abstract: The present study suggests an integrated approach for determining the optimal release policy for the Mula reservoir situated in the Godavari basin, India. The proposed integrated algorithm, named DP-PSO, is a hybridization of Dynamic Programming (DP) and Particle Swarm Optimization (PSO). The reservoir operation problem is formulated as a nonlinear optimization model subject to various constraints. Two case studies are considered: in the first, the efficiency of the proposed algorithm is tested on a small data set of 1 year, and in the second, the data set covers 10 years. The results obtained are compared in terms of objective function value as well as CPU time for performance evaluation of the integrated methods.
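The abstract does not detail the hybrid, but the dynamic-programming half of such methods is typically a backward recursion over discretized storage levels. A toy sketch under that assumption; the inflows, demands, capacity, and cost function are invented, not the Mula reservoir data.

```python
# Backward DP over discretized reservoir storage, choosing releases that
# track a demand target (squared deficit/surplus as the stage cost).
import numpy as np

inflow = [30, 50, 40, 20]          # per period (toy values)
demand = [35, 45, 40, 30]
S_MAX, STATES = 100, 101           # storage grid 0..100

levels = np.linspace(0, S_MAX, STATES)
V = np.zeros(STATES)               # terminal value
for t in reversed(range(len(inflow))):
    V_new = np.full(STATES, np.inf)
    for si, s in enumerate(levels):
        for r in levels:           # candidate releases
            s_next = s + inflow[t] - r
            if 0 <= s_next <= S_MAX:
                cost = (r - demand[t]) ** 2
                j = int(round(s_next / S_MAX * (STATES - 1)))
                V_new[si] = min(V_new[si], cost + V[j])
    V = V_new
print("optimal cost starting from a full reservoir:", V[-1])
```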

Journal ArticleDOI
TL;DR: The proposed BPM methodology can provide a guide for managers and organization leaders on the right steps to follow when implementing BPM and conducting BPM projects, and it can be tested by applying it to different business sectors and measuring organizations' performance during the implementation stages.
Abstract: Business process management (BPM) is one of the effective performance management methodologies used in managing process-oriented organizations. BPM has been adopted by many organizations and impressive results have been achieved. However, BPM is still in its infancy and many issues are yet to be resolved. Producing a unified list of BPM critical success factors (CSFs) and BPM principles is considered an important research area. The literature review showed that, although the majority of BPM principles and CSFs are the same or differ only slightly, different terminologies have been used to describe them. The review also showed that the methodologies used for BPM implementation were either very old, did not cover human interaction with BPM systems (BPMS), or described BPM methodology only partially. Therefore, the main objectives of this research are building insights into BPM's most recent developments, unifying BPM principles and CSFs, and proposing a comprehensive BPM methodology. The literature was analyzed to extract BPM's developments, and a mapping process was used to propose a unified list of BPM CSFs and principles, named critical success principles (CSPs). Twenty-two CSPs were identified and, based on their nature and implementation level, classified into three main levels: strategic, supportive and operational. Of the 22 CSPs, 36.5% were classified as strategic, 45.5% as operational, and only 18% as supportive. A comprehensive BPM methodology was proposed that combines the steps of generic BPM methodologies and BPMS methodologies and unifies the terminologies used in the reviewed methodologies; its pros and cons are discussed in this research. The implications of this research are both theoretical and practical. On the theoretical side, researchers can see the most recent developments in the scope of BPM, summarized in this research, and build on them in future work; the proposed CSPs can be further analyzed to find the relationships among them and how each affects BPM implementation; and the proposed BPM methodology can be tested by applying it to different business sectors and measuring organizations' performance during the implementation stages. On the practical side, the proposed methodology can provide a guide for managers and organization leaders on the right steps to follow when implementing BPM and conducting BPM projects. Moreover, the CSPs can guide BPM project managers, organization leaders, and business excellence units to focus their efforts on the significant improvement areas and the actions to be taken at strategic and operational levels.

Journal ArticleDOI
TL;DR: A systematic review of the existing literature is presented, focusing on the management of business process compliance requirements in order to summarize the evidence and provide a lead-up for appropriately positioning new research activities.
Abstract: One crucial aspect that has cost business organizations a great deal is the management of compliance requirements from various regulatory sources. In a bid to avoid being penalized, organizations have adopted various techniques to accomplish this task. However, the literature reveals that few thorough reviews have been centered on this subject in a systematic way: a review that systematically captures the crucial elements, such as implementation environment, the types of constraints addressed, and the main contributions and strengths of the existing techniques, has been missing, leaving the field without a sufficiently clear context of operation. This paper presents a systematic review of the existing literature on the management of business process compliance requirements, in order to summarize the evidence and provide a lead-up for appropriately positioning new research activities. Kitchenham's guidelines for conducting systematic literature reviews in software engineering were employed, together with a review planning template to execute the review. Results showed that control flow and data flow requirements have been addressed most in recent times, while temporal and resource allocation requirements have been under-researched. The approaches that have been employed in business process compliance requirements management are model checking, patterns, semantics, formal methods, ontologies, goal-based requirements analysis, and network analysis. The traditional business environment has been considered more than the cloud environment, and the research contributions have leaned more toward formal techniques compared to model checking and semantics. This shows a need for more research on business process compliance centered on the cloud environment. Researchers will be able to select the technique to adopt based on the combined importance of each criterion defined in this work.

Journal ArticleDOI
TL;DR: The performance of the presented work has been tested both quantitatively and qualitatively using different fidelity parameters such as peak signal-to-noise ratio, structural similarity index and normalized correlation coefficient.
Abstract: Trusted and highly secure copyright protection techniques that prevent the illegal copying and sharing of digital data have become essential in recent years owing to the rapid development of internet technology. Here, a lossless, robust color image watermarking scheme using a lifting scheme with grey wolf optimization (GWO) is proposed for copyright protection. The color watermark is inserted into the host image by changing the singular values of the host/cover image using an optimized strength factor, alpha. To increase security and robustness, the watermark is also scrambled using the Arnold transform along with the GWO optimization technique. The performance of the presented work has been tested both quantitatively and qualitatively using different fidelity parameters such as peak signal-to-noise ratio, structural similarity index and normalized correlation coefficient. The presented work has also been tested against various artificial attacks and compared with existing image watermarking techniques.
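Two ingredients named above can be sketched directly: Arnold scrambling of a square watermark, and additive embedding in the host's singular values with a strength factor alpha (which GWO would tune). Sizes, alpha, and the grayscale simplification are illustrative assumptions.

```python
# Arnold scrambling + singular-value embedding sketch.
import numpy as np

def arnold(img: np.ndarray, iterations: int = 1) -> np.ndarray:
    n = img.shape[0]                          # square image assumed
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

rng = np.random.default_rng(0)
host = rng.random((64, 64))
watermark = arnold(rng.random((64, 64)), iterations=5)  # scrambled watermark

U, S, Vt = np.linalg.svd(host)
alpha = 0.05                                  # strength factor (GWO-tuned)
S_w = np.linalg.svd(watermark, compute_uv=False)
watermarked = U @ np.diag(S + alpha * S_w) @ Vt
print("embedding distortion:", np.abs(watermarked - host).mean())
```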

Journal ArticleDOI
TL;DR: A practical approach to FMECA has been applied to determine the critical equipment in a super thermal power plant, and condition monitoring of this equipment has been performed.
Abstract: In recent years, there has been a trend to build super thermal power plants in order to achieve better economic viability. With the growth in capacity and size, the complexity of these plants has also grown multifold, and the more complex the system, the greater the chance of faults. Early detection of these faults allows time for preventive maintenance before a severe failure occurs. Condition monitoring is the implementation of advanced diagnostic techniques to reduce downtime and to increase efficiency and reliability. This research examines the use of advanced techniques such as vibration analysis and oil analysis to diagnose emerging problems of plant and machinery at an early stage, so that corrective and preventive actions can be planned to eliminate issues and enhance system reliability. Nowadays, most industries have adopted condition monitoring techniques as a support system for their basic maintenance strategies. Failure Mode, Effect and Criticality Analysis (FMECA) is associated with condition monitoring to determine the criticality of units or machines. It is a design method used to systematically analyze probable component failure modes of a product or process, assess the risk associated with these failure modes, and determine the resultant effects on system operations. In this study, a practical FMECA approach has been applied to determine the critical equipment in a super thermal power plant, and condition monitoring of this equipment has been performed.
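A common way to rank criticality in an FMECA of this kind is a risk priority number, RPN = severity x occurrence x detection; the equipment with the highest RPNs is flagged for condition monitoring. The failure modes and ratings below are made up for illustration, not the plant's data.

```python
# FMECA ranking sketch: sort failure modes by risk priority number.
failure_modes = [
    # (equipment, failure mode, severity, occurrence, detection) on 1-10 scales
    ("boiler feed pump",  "bearing wear",    8, 6, 4),
    ("induced draft fan", "blade imbalance", 7, 5, 3),
    ("mill gearbox",      "gear pitting",    9, 4, 6),
]

ranked = sorted(failure_modes, key=lambda m: m[2] * m[3] * m[4], reverse=True)
for eq, mode, s, o, d in ranked:
    print(f"{eq:18s} {mode:16s} RPN = {s * o * d}")
```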

Journal ArticleDOI
TL;DR: Rutting and fatigue performance parameters obtained through the MSCR and LAS tests, respectively, indicated that the addition of DSO to the asphalt binder increases the non-recoverable creep compliance (Jnr), which leads to a reduction in rutting performance.
Abstract: As the construction industry drives towards sustainability, all construction sectors, including asphalt pavement, are encouraged to explore more environmental approaches in their developments. Although the demand for oil is continuously increasing, worldwide oil production has started to diminish due to the depletion of resources. One of the most promising ways to address this issue is to produce bio-binders or bio-oils from biomass resources that can completely or partially replace petroleum asphalt binders. This study investigates the effect of bio-oil extracted from local date seeds using the Soxhlet method on the performance of asphalt binder used in the construction of asphalt roadways. Asphalt mastic was prepared using three volume ratios of 0.0%, 1.5%, and 2.5% of date seed oil (DSO) and one asphalt binder with a penetration grade of 60/70 (PG 64–16). Multiple rheological and advanced tests were conducted at different temperatures to assess the different performance parameters and compare them to the Superpave criteria and specifications. The findings show that the addition of the bio-oil to the asphalt binder increases the penetration, whereas both the softening point and the viscosity decrease. In addition, the temperature susceptibility and the high-temperature performance grade (PG) of the modified asphalt binder dropped. Rutting and fatigue performance parameters obtained through the MSCR and LAS tests, respectively, indicated that the addition of DSO to the asphalt binder increases the non-recoverable creep compliance (Jnr), which leads to a reduction in rutting performance. However, the traffic level was enhanced to tolerate heavier traffic than the conventional asphalt binder, despite the dramatic increase in strain. The fatigue resistance of the modified asphalt binder was improved, as indicated by the increase in the damage intensity and the number of cycles to failure.
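For reference, the non-recoverable creep compliance reported from the MSCR test is conventionally computed as the non-recovered strain per unit of applied creep stress, averaged over the loading cycles (the exact protocol and stress levels used in this study are not stated in the abstract):

```latex
J_{nr}(\tau) = \frac{1}{N}\sum_{k=1}^{N}\frac{\varepsilon_{nr,k}}{\tau}
```

where \(\tau\) is the applied creep stress, \(\varepsilon_{nr,k}\) is the non-recovered strain in cycle \(k\), and \(N\) is the number of creep-recovery cycles; a larger Jnr means more permanent deformation per unit stress, hence poorer rutting resistance.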

Journal ArticleDOI
TL;DR: This paper analyzes the existing defence systems against ARP attacks and proposes three different techniques for detecting and preventing them, ensuring the security of traditional ARP; this matters in medical computing, where a single bit inversion could lead to a wrong diagnosis.
Abstract: Network utilization has reached its maximum level due to the availability of high-end technologies at low cost. This has enabled network users to share sensitive information such as account details, patient records, genomics details for biomedical research, and defence details that could become targets of cyber-war. Data are vulnerable at every level of communication. The link-layer Address Resolution Protocol (ARP) is invoked before any data communication can take place among hosts in a LAN. Because of the stateless nature of this protocol, it has been misused for illegitimate activities, which lead to devastating attacks such as denial of service, man-in-the-middle, host impersonation, sniffing, and cache poisoning. Though various host-based and network-based intrusion detection/prevention techniques exist, they fail to provide a complete solution to this type of poisoning. This paper analyzes the existing defence systems against ARP attacks and proposes three different techniques for detecting and preventing them. The three techniques ensure the security of traditional ARP; this matters in medical computing, where a single bit inversion could lead to a wrong diagnosis.
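One detection idea in this general spirit (not necessarily one of the paper's three techniques) is to watch ARP replies and flag an IP whose MAC binding suddenly changes, a classic poisoning symptom. The sketch below requires scapy and packet-capture privileges; the alert policy is illustrative.

```python
# Watch ARP replies and flag IP-to-MAC binding changes.
from scapy.all import ARP, sniff

bindings = {}                       # ip -> first MAC seen for that ip

def inspect(pkt):
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:          # op 2 = is-at (reply)
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        if ip in bindings and bindings[ip] != mac:
            print(f"ALERT: {ip} changed {bindings[ip]} -> {mac} (possible spoof)")
        else:
            bindings.setdefault(ip, mac)

sniff(filter="arp", prn=inspect, store=False)
```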

Journal ArticleDOI
TL;DR: It is concluded that by the end of the first decade of the twenty-first century, the northern regions of Russia had not yet reached the level of development that 18 industrialized countries had attained in the early 1960s.
Abstract: The socio-economic and technological development of the northern regions of Russia is analysed in the context of the modernization of the world. The regions are particularly vulnerable to external shocks because they do not have their own development resources. Modernization is seen as a sequence of primary and secondary stages of upgrading, or as the integrated modernization carried out in developing countries. Absolute and relative modernization indicators are used to determine the ranking of a region in the global context, where the absolute value of the index is the indicator value of the developed European countries. It is concluded that by the end of the first decade of the twenty-first century, none of the northern regions of Russia had reached the level of development that 18 industrialized countries had attained in the early 1960s. The knowledge economy is developed in Moscow (86% of the level in developed countries), St. Petersburg (78%), and the Nizhny Novgorod region (68%); all other regions of Russia are far from these three leaders, and the divergence of modernization indicators is increasing. The growth rate of the Arctic regions was twice the world average over the last decade, but the development of the northern regions of Russia lags behind and is concentrated in a small number of regions receiving income from unusually high oil prices. This has made possible the growth of the social component of modernization, but technological and institutional modernization has not occurred.

Journal ArticleDOI
TL;DR: It is proposed to use indicators reflecting the state of the ecological and socio-economic spheres of life, as the most important areas of human activity in the Arctic zone, for the development of an information-logical model for describing the Arctic as a complex structured object.
Abstract: The article theoretically substantiates and analyzes the indicators for assessing the spheres of human activity, for the development of an information-logical model describing the Arctic as a complexly structured object. It is proposed to use indicators reflecting the state of the ecological and socio-economic spheres of life, as the most important areas of human activity in the Arctic zone. Statistical information on these indicators has been collected and systematized for forty-three municipalities belonging to the Arctic zone of the Russian Federation. Graphical models were compiled that reflect the dynamics of the considered indicators over the past 10 years and assess their trends. The information-logical model describing the Arctic as a complexly structured object is a combination of interdependent model parameters, each of which can act simultaneously as an influencing variable and as a marker of the current state of a municipality. The Arctic as a complexly structured object is a complex system of organized human activity and can be presented in the form of information and communication fields relating to various spheres of human activity: demographic (demo-social), natural, industrial, economic, innovative-technological, social, political, and spiritual. The article highlights eighteen main indicators characterizing the state of the demographic, natural, industrial, economic, innovative-technological and social spheres of life. For the formation of the information-logical model of the Arctic zone of the Russian Federation, these indicators were grouped into two main blocks: the environmental and the socio-economic spheres of human activity in the Arctic.

Journal ArticleDOI
TL;DR: A two-step nonparametric estimator for a triangular simultaneous equation model is employed; it is derived with consistency and asymptotic normality results as well as optimal convergence rates, and is used to analyze the responses of the market to structural changes.
Abstract: The global nickel market is modeled using a triangular simultaneous equations model constructed to analyze different situations in the market's development. The content and nature of the global nickel market are examined to identify the characteristics needed to build the model. In this paper, a two-step nonparametric estimator for a triangular simultaneous equation model is employed. In the first step, the reduced form and the corresponding residuals are estimated nonparametrically. In the second step, the structural equation is estimated using nonparametric regression with the reduced-form residuals included. The estimator is derived with consistency and asymptotic normality results as well as optimal convergence rates. Moreover, the model is used to analyze the responses of the market to structural changes.
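The two-step structure described above is a control-function approach: fit the reduced form nonparametrically, then include its residuals in the second-stage regression. In the sketch below, kernel ridge regressors stand in for the paper's nonparametric estimator, and the data are a toy supply-demand simulation.

```python
# Control-function sketch of a two-step nonparametric triangular estimator.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=(n, 1))                    # exogenous instrument
u = rng.normal(size=n)                         # unobserved confounder
price = (z[:, 0] + 0.8 * u + rng.normal(scale=0.3, size=n)).reshape(-1, 1)
demand = np.sin(price[:, 0]) - 0.8 * u + rng.normal(scale=0.3, size=n)

# Step 1: nonparametric reduced form price = g(z) + v, keep residuals v.
step1 = KernelRidge(kernel="rbf", alpha=0.1).fit(z, price[:, 0])
v = price[:, 0] - step1.predict(z)

# Step 2: structural regression of demand on price and the residuals.
X2 = np.column_stack([price[:, 0], v])
step2 = KernelRidge(kernel="rbf", alpha=0.1).fit(X2, demand)
print("fitted structural value at median price:",
      step2.predict([[np.median(price), 0.0]])[0])
```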

Journal ArticleDOI
TL;DR: An attempt was made to fuse cutting force, tool wear and displacement of tool vibration along with the cutting speed, feed and depth of cut to predict the surface roughness of hardened SS 410 steel using a multicoated hard metal insert with a sculptured rake face.
Abstract: In the manufacturing sector, surface finish quality has considerable importance, as it can affect the functioning of a component and possibly its cost. Surface quality is a significant parameter for evaluating the productivity of machine tools as well as machined components, and it is used as a critical quality indicator for the machined surface. In recent years, the prediction of surface roughness has become an area of interest for the machining industry. Cutting force, cutting temperature, tool wear, and vibration signals are some of the factors that can be used individually to predict surface roughness, but when they are used collectively a more accurate prediction is possible, since each of these factors has its own characteristic effect on surface roughness. In the present study, an attempt was made to fuse cutting force, tool wear and the displacement of tool vibration along with the cutting speed, feed and depth of cut to predict the surface roughness of hardened SS 410 steel (45 HRC) machined using a multicoated hard metal insert with a sculptured rake face. Regression models and an artificial neural network model were developed to fuse the cutting force, cutting temperature, tool wear and displacement of tool vibration to predict the surface roughness. The results showed that the surface roughness prediction by the artificial neural network had a higher accuracy than that of the regression model.
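As an illustration of the fusion idea (not the paper's dataset or exact models), stacking the process parameters and sensor signals into one feature vector lets a regression baseline be compared against a small neural network:

```python
# Sensor-fusion sketch: regression vs ANN for surface-roughness prediction.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
# Columns: speed, feed, depth of cut, cutting force, tool wear, vibration.
X = rng.normal(size=(n, 6))
Ra = 0.8 * X[:, 1] + 0.4 * X[:, 3] + 0.3 * X[:, 4] ** 2 + rng.normal(0, 0.1, n)

Xtr, Xte, ytr, yte = train_test_split(X, Ra, random_state=0)
for name, model in [("regression", LinearRegression()),
                    ("ANN", MLPRegressor(hidden_layer_sizes=(16,),
                                         max_iter=3000, random_state=0))]:
    model.fit(Xtr, ytr)
    print(name, "R^2:", round(model.score(Xte, yte), 3))
```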

Journal ArticleDOI
TL;DR: HGAPSO is introduced, combining attractive properties of GA and PSO; the hybrid approach runs the iterative process of GA after fixing the initial best population from PSO to solve the RAP.
Abstract: The redundancy allocation problem (RAP) is a non-linear programming problem which is very difficult to solve with existing heuristic and non-heuristic methods. In this paper, three algorithms, namely a heuristic algorithm (HA), a constraint optimization genetic algorithm (COGA) and a hybrid genetic algorithm combined with particle swarm optimization (HGAPSO), are applied to solve the RAP. Results obtained from the individual use of the genetic algorithm (GA) and particle swarm optimization (PSO) have some shortcomings. To overcome these, HGAPSO is introduced, combining the attractive properties of GA and PSO: the hybrid approach runs the iterative process of GA after fixing the initial best population from PSO. The results obtained from HA, COGA and HGAPSO with respect to the increase in reliability are 50.76, 47.30 and 62.31 respectively, and with respect to CPU time are 0.15, 0.209 and 3.07 respectively, as shown in Table 3 of the paper. COGA and HGAPSO were programmed in MATLAB.
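The hybridization described above can be skeletonized as: a short PSO run supplies a good initial population, then GA operators iterate on it. The objective below is a toy surrogate for the RAP reliability function, and all parameters are illustrative.

```python
# HGAPSO skeleton: PSO seeding followed by GA iterations.
import numpy as np

rng = np.random.default_rng(0)
POP, DIM, PSO_ITERS, GA_ITERS = 20, 4, 50, 100
fitness = lambda x: -np.sum((x - 3) ** 2)       # toy surrogate to maximize

# Phase 1: a short PSO run to seed the population.
X = rng.uniform(0, 6, (POP, DIM)); V = np.zeros_like(X)
pbest = X.copy(); gbest = max(X, key=fitness).copy()
for _ in range(PSO_ITERS):
    r1, r2 = rng.random((2, POP, DIM))
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
    X += V
    better = np.array([fitness(x) for x in X]) > \
             np.array([fitness(p) for p in pbest])
    pbest[better] = X[better]
    gbest = max(pbest, key=fitness).copy()

# Phase 2: GA iterations starting from the PSO-refined population.
pop = pbest.copy()
for _ in range(GA_ITERS):
    i, j = rng.integers(POP, size=2)
    cut = rng.integers(1, DIM)
    child = np.concatenate([pop[i][:cut], pop[j][cut:]])        # crossover
    child += rng.normal(0, 0.1, DIM) * (rng.random(DIM) < 0.2)  # mutation
    worst = min(range(POP), key=lambda k: fitness(pop[k]))
    if fitness(child) > fitness(pop[worst]):
        pop[worst] = child
print("best surrogate fitness:", fitness(max(pop, key=fitness)))
```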

Journal ArticleDOI
TL;DR: A reliable, more accurate and efficient model based on statistical analysis of sensor-based data is proposed for occupancy detection; the proposed model is observed to perform better in terms of efficiency and accuracy than existing work in the literature.
Abstract: Context-aware computing is a growing research domain owing to technological advancements in sensor technology, big data, artificial intelligence, robotics and automation. It has many applications for making daily human life sustainable, comfortable, and smooth. Context-aware computing also includes ambient intelligence and applications such as occupancy detection, prediction, and user recognition. Occupancy detection and recognition help in developing intelligent applications for energy management and intelligent decision making, which results in cost reduction and the advance prevention of faults and failures in services and products. Several studies have been conducted to detect occupancy with different methodologies and approaches, using various types of data such as environmental parameters, image- and video-based attributes, wireless or sensor-based parameters, and noise-based parameters. This paper proposes a reliable, more accurate and efficient model based on statistical analysis of sensor-based data for occupancy detection. A detailed quantification of the relationships among the ambient attributes is presented, and an ensemble model is developed based on the extreme learning machine technique to achieve a significant improvement in accuracy, efficiency, generalization and reliability. In addition, the paper proposes an online, adaptive model based on the online sequential extreme learning machine to perform occupancy detection on real-time data when the complete data set is not available and learning is done with recent data points arriving as streams. The results are compared with existing work in the domain, and the proposed model is observed to perform better in terms of efficiency and accuracy.
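What makes the extreme learning machine fast is that the hidden layer is random and only the output weights are solved, in one step, by a pseudoinverse. A minimal sketch; the ambient-sensor feature layout and labels are simulated stand-ins for the paper's dataset.

```python
# Minimal extreme learning machine for binary occupancy detection.
import numpy as np

rng = np.random.default_rng(0)
n, d, hidden = 1000, 5, 50     # samples; temp, humidity, light, CO2, noise
X = rng.normal(size=(n, d))
y = (X[:, 2] + 0.5 * X[:, 3] > 0).astype(float)   # occupied or not (synthetic)

W = rng.normal(size=(d, hidden))                  # random input weights (fixed)
b = rng.normal(size=hidden)
H = np.tanh(X @ W + b)                            # hidden-layer activations
beta = np.linalg.pinv(H) @ y                      # output weights in one step

pred = (np.tanh(X @ W + b) @ beta > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```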

Journal ArticleDOI
TL;DR: The results of this application show that the proposed approach can serve as a viable and practical means of capturing and analyzing interdependencies among projects within a portfolio.
Abstract: Despite academics' and practitioners' increasing focus on project portfolio management over the last three decades, very few methods and tools have been proposed for modeling and analyzing interdependencies among projects within a portfolio. This paper proposes a novel approach that integrates three techniques: social network analysis (SNA), the fuzzy technique for order of preference by similarity to ideal solution (TOPSIS), and cross-impact matrix multiplication applied to classification (MICMAC). Network mapping provides project managers with a holistic view of interdependencies among projects; fuzzy TOPSIS MICMAC and SNA measures are used to classify projects in terms of their driving power and dependence power and their out- and in-degree centrality. This categorization offers a helpful tool for project managers to distinguish among projects and classify them based on their interdependency levels, and thus aids in identifying critical projects. For demonstration, the approach is applied to model and analyze interdependencies between projects within a real-life portfolio from industry. The results of this application show that the proposed approach can serve as a viable and practical means of capturing and analyzing interdependencies among projects within a portfolio.
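The SNA ingredient is straightforward to illustrate: out-degree centrality as a proxy for a project's driving power and in-degree for its dependence. The five-project dependency graph below is invented for illustration.

```python
# Degree-centrality sketch for a project-dependency network.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([("P1", "P2"), ("P1", "P3"), ("P2", "P4"),
                  ("P3", "P4"), ("P4", "P5")])   # Pi -> Pj: Pi feeds Pj

for p in sorted(G.nodes):
    print(f"{p}: driving (out-degree) = {G.out_degree(p)}, "
          f"dependence (in-degree) = {G.in_degree(p)}")
```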

Journal ArticleDOI
TL;DR: Results indicate that applying boundary conditions reduces the computational time needed to solve the case study problem, minimizes the number of stations, reduces idle time and shortens the assembly line for the MTALBP.
Abstract: The mixed model two-sided assembly line balancing problem (MTALBP) arises in plants delivering a high volume of large-sized products, such as cars or trucks, in order to increase space utilization. In the MTALBP, two single stations are introduced at each position, left and right of the assembly line, for the combined product model. In this paper, the proposed objective function maximizes the workload at each station so that the number of stations is minimized. Since the problem is well known to be NP-hard, benchmark MTALBP problems are solved using a branch-and-bound algorithm in the Lingo 16 solver. The proposed mathematical model is solved on benchmark test problems from the literature and then applied to a case study of a turbocharger assembly line plant. The experimental results of the case study show that the line efficiency obtained is 86.50% for model A and 80.75% for model B, and the number of single and mated stations of the assembly line is close to the theoretical minimum number of stations. The results indicate that applying boundary conditions reduces the computational time needed to solve the case study problem, minimizes the number of stations, reduces idle time and shortens the assembly line for the MTALBP.
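The two quantities quoted above follow from standard line-balancing arithmetic: the theoretical minimum number of stations is the total task time divided by the cycle time (rounded up), and line efficiency is the fraction of station time that is busy. The task times and cycle time below are illustrative, not the turbocharger case-study data.

```python
# Line-balancing arithmetic: theoretical minimum stations and line efficiency.
import math

task_times = [4, 6, 3, 5, 7, 2, 6]   # minutes per task (toy values)
cycle_time = 10                      # minutes per station

n_min = math.ceil(sum(task_times) / cycle_time)         # lower bound on stations
n_actual = 4                                            # stations actually opened
efficiency = sum(task_times) / (n_actual * cycle_time)  # fraction of busy time
print(f"theoretical minimum stations = {n_min}, efficiency = {efficiency:.1%}")
```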