
Showing papers in "International Journal of Systems Assurance Engineering and Management in 2021"


Journal ArticleDOI
TL;DR: In this work, the factorization method is used to establish a 3-D dynamic simulation structure of the moving human body; the proposed approach achieves an authenticity coefficient of 0.86, outperforming other state-of-the-art algorithms.
Abstract: When athletes perform different sports, the joint angles and movement speeds differ, so the feature capture points do not correspond well to the key areas. This produces messy feature-correspondence errors and affects the accuracy of the shape-basis calculation. Traditional 3D reconstruction algorithms for sports images are affected by this messy profile, and it is difficult to guarantee the accuracy of later modeling. Therefore, an error-control mechanism for the virtual modeling of randomly collected sports-image points is recommended for such complex systems. In this work, the factorization method is used to establish a 3-D dynamic simulation structure of the human body in motion. The Stolt transformation is applied to adjust the azimuth offset rate of all image-capturing regions so that excessive errors are not generated during feature matching. The large messy residual error is used for third-order signal compensation to realize the 3-D dynamic simulation structure in the humanoid signal-image sequence. The observed outcomes show that this approach improves the authenticity of the 3-D dynamic simulation of human signal-image classifications. Using 150 images and 915 key points, the proposed approach achieves an authenticity coefficient of 0.86, outperforming other state-of-the-art algorithms.

69 citations


Journal ArticleDOI
TL;DR: This article presents a state-of-the-art review of the GWO algorithm, its progress, and its applications to more complex real-world problem-solving.
Abstract: From the earliest eras to the present, the human race has striven to better its life by uncovering the hidden secrets of nature. Not long ago, one would hardly have thought that colonies of ants, packs of grey wolves, or elephants would be used to design optimization algorithms. The Grey Wolf Optimization (GWO) algorithm is one such technique, motivated by the socio-hierarchical behaviour of Canis lupus (the grey wolf). This paper gives a detailed description of GWO along with developments of the standard GWO and its applications. Precisely, this article presents a state-of-the-art review of the GWO algorithm, its progress, and its applications to more complex real-world problem-solving.

43 citations
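The core GWO position update, in which the pack's three best wolves (alpha, beta, delta) pull the remaining wolves toward them while a control parameter a decays linearly from 2 to 0, can be sketched as follows. This is a minimal illustrative implementation of the standard algorithm, not code from the reviewed works; for brevity the three leaders here only move when a follower displaces them in the ranking.

```python
import random

def gwo(f, dim, bounds, n_wolves=20, iters=100, seed=0):
    """Minimal Grey Wolf Optimizer sketch (minimisation)."""
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)                     # best wolf first
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2 - 2 * t / iters                  # decays 2 -> 0: exploration -> exploitation
        for i in range(3, n_wolves):           # leaders stay put until displaced
            new = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    D = abs(C * leader[d] - wolves[i][d])
                    x += leader[d] - A * D
                new.append(min(hi, max(lo, x / 3)))  # average of the three pulls, clamped
            wolves[i] = new
    wolves.sort(key=f)
    return wolves[0]

# Toy usage: minimise the 2-D sphere function.
best = gwo(lambda x: sum(v * v for v in x), dim=2, bounds=(-5, 5))
```

On the sphere function shown, the pack converges near the origin within a hundred iterations.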


Journal ArticleDOI
TL;DR: In this article, a performance index system for university social science research based on a BP neural network is proposed and, together with the relevant theory, used to construct an evaluation model; the difference between each sample's predicted value and its expected output is small, with all prediction errors below 1.
Abstract: Higher education in China needs to focus on the cultivation of innovative talent, independent innovation, technological development, cultural innovation, and the promotion of scientific and technological knowledge. This paper proposes a performance index system for university social science research based on a BP neural network, and the relevant theoretical knowledge is used to construct a performance evaluation model for such research. The results show that the difference between each sample's predicted value and its expected output is not large, and the prediction errors are also relatively small, all less than 1. The proposed BP-neural-network-based evaluation method for social science research in colleges and universities is efficient, operable, and accurate, so the BP neural network model is used to evaluate and optimize this research performance. The established model has a very low error value and good generalization ability, which shows that the training sample data fit the neural network simulation well and, likewise, that the network's outputs can come very close to the expected values.

35 citations
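The evaluation model described above is a standard backpropagation (BP) network. A bare-bones version with one sigmoid hidden layer, trained on toy indicator data (the inputs and target below are invented stand-ins, not the paper's actual index system), looks roughly like this:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class BPNet:
    """Tiny 3-4-1 backpropagation network sketch."""
    def __init__(self, n_in=3, n_hid=4, seed=1):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
        self.b1 = [0.0] * n_hid
        self.w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hid)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        self.o = sigmoid(sum(w * h for w, h in zip(self.w2, self.h)) + self.b2)
        return self.o

    def train_step(self, x, y, lr=0.5):
        o = self.forward(x)
        delta_o = (o - y) * o * (1 - o)        # output-layer error term
        for j, h in enumerate(self.h):
            delta_h = delta_o * self.w2[j] * h * (1 - h)
            self.w2[j] -= lr * delta_o * h
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * delta_h * xi
            self.b1[j] -= lr * delta_h
        self.b2 -= lr * delta_o
        return (o - y) ** 2                    # squared error before the update

# Toy data: three normalised indicator scores per sample; the
# target is their average, standing in for an expert rating.
rng = random.Random(0)
data = []
for _ in range(40):
    x = [rng.random() for _ in range(3)]
    data.append((x, sum(x) / 3))

net = BPNet()
first = sum(net.train_step(x, y) for x, y in data)   # error of first epoch
for _ in range(200):
    last = sum(net.train_step(x, y) for x, y in data)
```

After 200 epochs the summed squared error drops well below its initial value, the same qualitative behaviour the abstract reports.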


Journal ArticleDOI
TL;DR: In this article, a low-budget solution for intelligent grading and sorting of apples using a deep-learning-based approach is proposed; the results show that the image recognition system can successfully sort apples according to their perimeter characteristics.
Abstract: Manual sorting of fruit is a significant challenge for the agricultural sector, as it is laborious and may lead to inconsistent classification. To improve apple-sorting efficiency and realize non-destructive testing of apples, machine vision technology integrated with artificial intelligence is introduced in this article for the design of an apple sorting system. The article provides a low-budget alternative for intelligent grading and sorting of apples employing a deep-learning-based approach. Automatic grading is realized according to a determined apple grading standard by applying stages such as grayscale processing, binarization, enhancement processing, and feature extraction. The proposed end-to-end low-cost machine vision system automates apple sorting, significantly reduces labor cost, and provides a time-effective solution for medium and large-scale enterprises. To verify the feasibility of the scheme, the image recognition system of the apple sorting machine was tested: an average accuracy of 99.70% is achieved, with a recognition accuracy of 99.38% for the CNN-based sorting system. The results show that the sorting image recognition system can successfully sort apples according to their perimeter characteristics. It realizes non-destructive testing and grade classification of apples and provides an important reference for the research and development of automatic fruit-sorting systems.

33 citations


Journal ArticleDOI
TL;DR: This work proposes a machine-learning-based healthcare model for accurate and early detection of diabetes and shows that only a few relevant features are needed to enhance the model's accuracy.
Abstract: Diabetes is a chronic hyperglycemic disorder affecting hundreds of millions of people around the world every year. The presence of irrelevant features and an imbalanced dataset are significant issues when training a model. Available patient medical records quantify symptoms, body characteristics, and clinical laboratory test values that can be used in biostatistical studies aimed at identifying patterns or characteristics that current practice cannot detect. This work proposes a machine-learning-based healthcare model for accurate and early detection of diabetes. Five machine learning classifiers are used: logistic regression, K-nearest neighbor, Naive Bayes, random forest, and support vector machine. Fast correlation-based filter feature selection removes the irrelevant features, and the synthetic minority over-sampling technique balances the imbalanced dataset. The model is evaluated with four performance metrics: accuracy, sensitivity, specificity, and area under the curve (AUC). The experimental outcomes show that only a few relevant features are needed to enhance the accuracy of the developed model. The RF classifier achieves the highest accuracy, sensitivity, specificity, and AUC of 97.81%, 99.32%, 98.86%, and 99.35%, respectively.

29 citations
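Of the techniques listed, the synthetic minority over-sampling step is easy to sketch: each new minority sample is interpolated between an existing minority sample and one of its k nearest minority neighbours. A simplified pure-Python illustration with invented toy data (not the paper's implementation, which presumably uses a library version of SMOTE):

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Sketch of SMOTE: synthesise new minority-class samples by
    interpolating between a random minority sample and one of its
    k nearest minority neighbours (Euclidean distance)."""
    rng = random.Random(seed)

    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: sq_dist(x, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([xi + gap * (ni - xi) for xi, ni in zip(x, nb)])
    return synthetic

# Four minority samples with two features each (toy data).
minority = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.25], [0.3, 0.2]]
new = smote(minority, n_new=4)
```

Because every synthetic point lies on a segment between two real minority points, it stays inside the minority class's bounding box.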


Journal ArticleDOI
TL;DR: In this paper, a conceptual model of PCP collaborative design is established to determine the accuracy of the BIM model at different design stages, and the effectiveness of the BIM-based collaborative building design method is verified through an example.
Abstract: The popularization and maturity of BIM technology provide a new method and platform for the collaborative design of prefabricated buildings. At present, BIM applications in prefabricated buildings mostly remain at the construction and production stage. Prefabricated buildings are the best choice for the transformation and upgrading of the construction industry because of their high efficiency and small environmental impact. A prefabricated building is not only a building form but also a complex system-engineering effort, which requires the methods and ideas of systems engineering to solve practical problems. To promote the application of BIM in prefabricated building design, this paper compares BIM collaborative design with traditional design methods. Based on the requirements of the BIM IDM, a conceptual model of PCP collaborative design is established to determine the accuracy of the BIM model at different design stages. The parametric design tools Dynamo and Structural Precast for Revit were used to split prefabricated components, configure steel bars, and analyze the structure of prefabricated buildings. The effectiveness of the BIM-based collaborative design method is verified through an example.

26 citations


Journal ArticleDOI
TL;DR: The novel DNL method ranks and selects optimal containers according to dynamic data involvement and provides efficient solutions for data processing.
Abstract: Big data computing domains involve huge networks of connected devices across internet and social-network applications, with concerns mainly about security, integrity, authentication, and data privacy. The allocation and efficient usage of containers provided by cloud service providers has a huge impact on efficient data processing and handling. Batch processing handles huge databases by admitting them into programmable domains and segregating them according to size, reliability, processing speed, and required memory space, whereas stream processing scrutinizes data promptly as it enters the stream. Container selection plays a major role in both processing methodologies and makes effective resource scheduling possible and efficient in cloud service provisioning. In the proposed method, the Guided Container Selection (GCS) process eradicates the bottleneck problem by selecting an efficient and optimal container that satisfies all requirements, such as size, reliability, and processing speed. Whether batch or stream processing is used, multi-domain container selection is analyzed and resolved through Deep Neural Learning (DNL). The novel DNL method ranks and selects the optimal container according to dynamic data involvement and provides efficient solutions for data processing. In the future, it can also help cloud service providers and their consumers select appropriate containers for similar requirements.

25 citations


Journal ArticleDOI
TL;DR: In this article, the authors provided an elaborate discussion about the education sector's impact during a disease outbreak in India and offered a detailed discussion regarding how India adopts the e-learning approach in this critical situation.
Abstract: Education institutions in India, such as schools, colleges, and universities, are currently based on traditional learning methods and follow the conventional setting of face-to-face lectures in a classroom. Although much of the academic sector has started blended learning, most institutions are still stuck with old practices. The unexpected outbreak of a deadly infection called COVID-19, caused by SARS-CoV-2, shook the whole world, and the WHO declared it a pandemic. This circumstance challenged the entire education system worldwide and compelled educators to shift to an online mode immediately. Many educational organizations that were earlier unwilling to change their traditional didactic practice had no choice but to move exclusively to online teaching and learning. This article provides an elaborate discussion of the pandemic's impact on the education sector in India, offers a detailed discussion of how India has adopted the e-learning approach in this critical situation, and describes how to cope with the challenges related to e-learning.

25 citations


Journal ArticleDOI
TL;DR: In this article, CFD technology is used to numerically simulate the air distribution of central air conditioning in the tall atrium of large hotel buildings.
Abstract: The modern construction industry needs environmentally friendly, energy-efficient buildings to support the idea of sustainability. This article investigates the numerical simulation of the air distribution of central air conditioning in a tall atrium, using CFD technology to simulate the airflow in the atrium of large hotel buildings. The optimal atrium design is achieved by numerically simulating the air distribution in the large atrium, checking the airflow velocity field and temperature field under different summer working conditions. Under the precondition of fixed air volume, the FLUENT software was used to analyze changes in the supply-vent angle. The airflow velocity and temperature fields were evaluated under different working conditions: the measuring-point temperatures of lateral lines 1 and 2 lay between 300.5 and 301 K, while the remaining measuring points fluctuated around 300 K. The hotel atrium was tested on site and the measured values were taken as the initial parameters for the numerical simulation. The simulation and measurement results were compared and analyzed to verify the effectiveness and reliability of the simulated air distribution in large-space buildings.

24 citations


Journal ArticleDOI
TL;DR: In this article, a particle swarm optimization neural network was used to track the change rules of popular tourist attractions, and the prediction accuracy of tourist attractions is better than that of the traditional model.
Abstract: With the growth of the tourism industry, the tourist experience has changed dramatically. There are two main types of sustainability in tourism: sustainability of the destination environment and sustainability of the tourists' experience. For sightseeing recommendations, a robust system is needed that reflects the current ground conditions at the site. Predicting tourist volume at attractions has always been an interesting topic in tourism research and one of the field's difficult problems. An RBF neural network, with its parameters optimized by particle swarm optimization, was used to establish a prediction model for popular tourist spots, which was compared with the traditional prediction model. The results show that the particle-swarm-optimized neural network better tracks the change rules of popular tourist attractions, and its prediction accuracy is clearly better than that of the traditional model. Prediction efficiency is also higher, meeting the requirements of online prediction: the training and prediction times of the model are reduced, which speeds up modeling and allows online prediction of popular tourist attractions.

24 citations
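The particle swarm optimization step used to tune the network's parameters follows the canonical velocity/position update: each particle is pulled toward its personal best and the global best. A minimal sketch on a toy objective (illustrative only; the paper applies PSO to RBF network parameters, not to this function):

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimisation sketch (minimisation).
    w: inertia weight, c1/c2: cognitive and social coefficients."""
    rng = random.Random(seed)
    lo, hi = bounds
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(v * v for v in x), dim=2, bounds=(-10, 10))
```

When tuning an RBF network, the "position" vector would encode the centres, widths, and output weights, and f would be the network's training error.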


Journal ArticleDOI
TL;DR: Development of a bio-medical non-invasive device is presented for recognition and measurement of Sleep Apnea, which generates mild impulse of current which is sufficient to make the patient awake and save them from a possible fatality (mid sleep death).
Abstract: A condition in which pauses in breathing occur during a person's sleep is termed "sleep apnea", a very common but potentially serious sleep disorder. In this paper, the development of a non-invasive biomedical device for recognizing and measuring this disorder is presented. Blood pressure and heartbeat patterns are obtained from sensors that continuously monitor changes in them; when a dip in the monitored values is observed, the proposed device generates a mild current impulse sufficient to wake the patient and save them from a possible fatality (mid-sleep death). A basic user-friendly mobile application has also been developed to allow monitoring by a remote user.

Journal ArticleDOI
TL;DR: In this article, the Canny algorithm is used for text edge detection and the k-means algorithm for pixel clustering, effectively improving the accuracy of image text recognition.
Abstract: The latest research in image character recognition has led to various developments in modern technology for improving recognition rate and precision. This technology is significant for character recognition, business card recognition, document recognition, vehicle license plate recognition, and other smart-city applications, so its effectiveness should be improved. To improve the accuracy of image text recognition effectively, this article uses the Canny algorithm for text edge detection and the k-means algorithm for pixel clustering. This combination, together with maximally stable extremal regions and stroke-width optimization for image text, yields better results in terms of recognition rate, recall, precision, F-score, and accuracy. The results show correct recognition rates of 88.3% and 72.4% respectively, with an accuracy of 90.5% for the proposed method. The algorithm has a high image text recognition rate, can recognize images taken in complex environments, and removes noise well, making it a strong choice for image text recognition.

Journal ArticleDOI
TL;DR: An overview of some popular weighting methods applicable to the MCDM process is provided, and the performance of these methods is shown through a case study.
Abstract: Decision-making is an integral part of almost all processes, whether complex or simple. It often refers to prioritizing (ranking) alternatives based on several conflicting criteria. To ensure that decision-making runs smoothly with minimal error, multiple-criteria decision-making (MCDM) is used to obtain the solution. The present work emphasizes weighting methods, an important aspect of MCDM methods that determines the relative importance of each criterion. The relative importance of each criterion is expressed by a set of preferences, called weights, represented between 0 and 1. The weights of criteria influence the outcome of any decision-making process, so it is essential to highlight the significance of weighting methods in determining criteria preferences. In the literature, researchers have reported various weighting methods for calculating the relative weights of criteria used to rank alternatives. The present study provides an overview of some popular weighting methods applicable to the MCDM process and shows their performance through a case study.
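One widely used objective weighting method of the kind such a survey covers is entropy weighting: criteria whose values vary more across the alternatives carry more information and receive larger weights. A small sketch (the decision matrix below is invented for illustration, not the paper's case study):

```python
import math

def entropy_weights(matrix):
    """Entropy weighting sketch. matrix[i][j] is alternative i's
    (benefit-type) score on criterion j; returns weights summing to 1."""
    m, n = len(matrix), len(matrix[0])
    divergence = []
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        total = sum(col)
        p = [v / total for v in col]                       # normalised column
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        divergence.append(1 - e)                           # 0 for a uniform column
    s = sum(divergence)
    return [d / s for d in divergence]

# Four alternatives scored on three criteria (toy data).
scores = [[7, 9, 9], [8, 7, 8], [9, 6, 8], [6, 7, 8]]
w = entropy_weights(scores)
```

The third criterion's scores are nearly constant across alternatives, so it ends up with a much smaller weight than the first.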

Journal ArticleDOI
TL;DR: A hybrid GA-ABC, a genetic-algorithm-based artificial bee colony algorithm, is proposed for feature selection and classification using classifier-ensemble techniques; it increases classification accuracy to more than 90%, compared with other feature selection methods.
Abstract: The diagnosis of heart disease is a serious concern, so diagnosis should be done remotely and regularly to enable early action. Finding the prevalence of heart disease has become a key research area, and many models have been proposed in recent years. Optimization algorithms play a vital role in diagnosing heart disease with high accuracy. The main goal of this work is to develop a hybrid GA-ABC, a genetic-algorithm-based artificial bee colony algorithm, for feature selection and classification using classifier-ensemble techniques. The ensemble classifier consists of four algorithms: support vector machine, random forest, Naive Bayes, and decision tree. The obtained results show that the proposed GA-ABC-EL model increases classification accuracy to more than 90% compared with other feature selection methods.
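The simplest way the four base classifiers can be fused into an ensemble is majority voting; the abstract does not say whether the paper uses plain or weighted voting, so the sketch below is only illustrative:

```python
from collections import Counter

def majority_vote(predictions):
    """Majority-vote ensemble sketch. `predictions` is a list of
    per-classifier label lists (e.g. SVM, RF, NB, DT); the ensemble
    label for each sample is the most common individual prediction."""
    n_samples = len(predictions[0])
    fused = []
    for i in range(n_samples):
        votes = Counter(clf[i] for clf in predictions)
        fused.append(votes.most_common(1)[0][0])
    return fused

# Toy per-classifier predictions for four patients (1 = disease).
svm = [1, 0, 1, 1]
rf  = [1, 1, 1, 0]
nb  = [0, 0, 1, 1]
dt  = [1, 0, 0, 1]
print(majority_vote([svm, rf, nb, dt]))  # → [1, 0, 1, 1]
```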

Journal ArticleDOI
TL;DR: In this paper, a novel filtering technique (FILTER) is proposed for effective defect prediction using SVMs; it enhances the performance of the SVM-based SDP model by 16.73%, 16.80%, and 7.65% in terms of accuracy, AUC, and F-measure, respectively.
Abstract: Software defect prediction (SDP) plays a key role in the timely delivery of a good-quality software product. In the early development phases, it predicts the error-prone modules that could cause heavy damage or even failure of the software in the future. It thereby allows targeted testing of these faulty modules, reducing the total development cost while ensuring a high-quality end product. Support vector machines (SVMs) are extensively used for SDP, but the unequal counts of faulty and non-faulty modules in the dataset are an obstacle to their accuracy. In this work, a novel filtering technique (FILTER) is proposed for effective defect prediction using SVMs. SVM-based classifiers (linear, polynomial, and radial basis function) are designed using the proposed filtering technique over five datasets, and their performances are evaluated. The proposed FILTER enhances the performance of the SVM-based SDP model by 16.73%, 16.80%, and 7.65% in terms of accuracy, AUC, and F-measure, respectively.
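The three reported measures can be computed from predictions as follows; the AUC here uses the Mann-Whitney formulation (the probability that a random faulty module is scored above a random clean one). These are illustrative helper functions, not the paper's code:

```python
def accuracy(y, yhat):
    """Fraction of modules labelled correctly."""
    return sum(a == b for a, b in zip(y, yhat)) / len(y)

def f_measure(y, yhat, positive=1):
    """Harmonic mean of precision and recall for the faulty class."""
    tp = sum(a == positive and b == positive for a, b in zip(y, yhat))
    fp = sum(a != positive and b == positive for a, b in zip(y, yhat))
    fn = sum(a == positive and b != positive for a, b in zip(y, yhat))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def auc(y, scores):
    """ROC AUC via the Mann-Whitney statistic: probability that a
    randomly chosen faulty module (label 1) scores above a randomly
    chosen clean one (label 0), counting ties as half a win."""
    pos = [s for s, label in zip(scores, y) if label == 1]
    neg = [s for s, label in zip(scores, y) if label == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: four modules scored by some defect predictor.
print(auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # → 0.75
```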

Journal ArticleDOI
TL;DR: This paper provides detailed information on different brain-tumor segmentation methods along with their merits and demerits, and presents a quantitative analysis and performance evaluation of existing techniques.
Abstract: In medical image analysis, recognizing tumors in the brain is a very important task: a tumor is an irregular cell population in the brain that can lead to cancer, which should be diagnosed at an early stage. Medical imaging techniques play an important role in cancer diagnosis; the most widely used and efficient technique for this purpose is magnetic resonance imaging (MRI), and there has been huge progress in MRI for assessing brain injury and exploring brain anatomy. Segmentation and detection of tumors from MRI images are performed with image processing techniques. Manual detection of brain tumors is a complex task, so different image segmentation methods have been developed to detect and segment tumors from MRI images. The various recent brain-tumor segmentation techniques are thoroughly discussed in this paper, and a quantitative analysis and performance evaluation of existing techniques is provided. This survey gives detailed information on the different segmentation methods along with their merits and demerits, and the effectiveness of the methods is shown in terms of performance parameters.

Journal ArticleDOI
TL;DR: In this paper, a movie recommendation approach based on collaborative filtering and singular value decomposition plus-plus (SVD++) is proposed and compared with well-known machine learning approaches, namely k-nearest neighbor (K-NN), SVD, and co-clustering.
Abstract: The increasing demand for personalized information has resulted in the development of the Recommender System (RS). RS has been widely utilized and broadly studied to infer users' interests and make appropriate recommendations. This paper gives an overview of several types of recommendation approaches based on user preferences, ratings, domain knowledge, users' demographic data, and users' context, and lists the advantages and disadvantages of each RS approach. We also propose a movie recommendation approach based on collaborative filtering and singular value decomposition plus-plus (SVD++). The proposed approach is compared with well-known machine learning approaches, namely k-nearest neighbor (K-NN), singular value decomposition (SVD), and co-clustering. It is experimentally verified using the MovieLens 100K dataset, and the error of the RS is measured using Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). The results show that the proposed approach gives a lower error rate, with RMSE of 0.9201 and MAE of 0.7219. The approach also overcomes the cold-start and data-sparsity problems and provides users with relevant items and services.
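The two error measures used above are straightforward to compute from held-out ratings and the recommender's predictions (the rating values below are invented for illustration):

```python
import math

def rmse(actual, predicted):
    """Root Mean Square Error: penalises large rating errors more."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def mae(actual, predicted):
    """Mean Absolute Error: average magnitude of the rating error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Toy held-out ratings vs. a recommender's predictions.
ratings     = [4.0, 3.5, 5.0, 2.0]
predictions = [3.8, 3.9, 4.5, 2.5]
```

Because RMSE squares the errors before averaging, it is always at least as large as MAE for the same predictions.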

Journal ArticleDOI
TL;DR: This paper eliminates the need to generate a severity score for software vulnerabilities by using a vulnerability's description for prioritization, based on word embeddings and a convolutional neural network.
Abstract: Whenever a vulnerability is detected by the testing team, it is described based on its characteristics and a detailed overview is given. Usually, certain features or keywords point towards the possible severity level of a vulnerability, so a possible estimate of the severity level can be made just from the description. In this paper, we eliminate the need to generate a severity score for software vulnerabilities by using a vulnerability's description for prioritization. The study makes use of word embeddings and a convolutional neural network (CNN). The CNN is trained with sufficient sample vulnerability descriptions from all categories so that it can capture discriminative words and features for the categorization task. The proposed system helps channel the efforts of the testing team by prioritizing newly found vulnerabilities into three categories based on previous data. The dataset includes three data samples from three different vendors and two mixed-vendor data samples.

Journal ArticleDOI
TL;DR: The KCSS presented in this study provides multi-criteria node selection based on artificial-intelligence decision making, giving the scheduler a broad picture of the cloud's condition and the user's requirements.
Abstract: Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. This study presents a Kubernetes Container Scheduling Strategy (KCSS) based on Artificial Intelligence (AI) that can assist in decision making to control the scheduling and the shifting of load to nodes. The aim is to improve the scheduling of containers requested by users, enhancing scheduling efficiency and reducing cost. Existing container scheduling techniques often assign a node to each new container based on a single individual criterion; the system presented in this study greatly improves on this constraint. The KCSS provides multi-criteria node selection based on AI decision-making systems, giving the scheduler a broad picture of the cloud's condition and the user's requirements. The AI scheduler allows users to easily make use of fractional Graphics Processing Units (GPUs), integer GPUs, and multiple nodes of GPUs for distributed training on Kubernetes.
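A multi-criteria node-selection step of the kind KCSS describes can be sketched as a filter-and-score pass over candidate nodes. All field names, criteria, and weights below are hypothetical, chosen for illustration; the abstract does not specify the paper's actual criteria:

```python
def score_nodes(nodes, request, weights):
    """Hypothetical multi-criteria node scoring sketch for container
    scheduling: nodes lacking the requested free CPU/memory are
    filtered out, the rest ranked by a weighted sum of criteria."""
    feasible = [n for n in nodes
                if n["cpu_free"] >= request["cpu"]
                and n["mem_free"] >= request["mem"]]

    def score(n):
        # More headroom is better; higher current load is worse.
        return (weights["cpu"] * n["cpu_free"] / request["cpu"]
                + weights["mem"] * n["mem_free"] / request["mem"]
                - weights["load"] * n["load"])

    return sorted(feasible, key=score, reverse=True)

nodes = [
    {"name": "node-a", "cpu_free": 4, "mem_free": 8,  "load": 0.7},
    {"name": "node-b", "cpu_free": 2, "mem_free": 16, "load": 0.2},
    {"name": "node-c", "cpu_free": 1, "mem_free": 4,  "load": 0.1},
]
ranked = score_nodes(nodes,
                     {"cpu": 2, "mem": 4},
                     {"cpu": 1.0, "mem": 1.0, "load": 2.0})
```

Here node-c is filtered out for insufficient CPU, and node-b outranks node-a thanks to its memory headroom and lower load.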

Journal ArticleDOI
TL;DR: Using the J48 machine learning algorithm, up to 96% accuracy in vulnerability detection was achieved; statistical parameters such as the ROC curve, Kappa statistic, recall, and precision were used to analyze the results.
Abstract: Software quality is the prime concern in software engineering, and vulnerability is one of the major threats in this respect. Vulnerability hampers the security of software and also impairs its quality. In this paper, we conduct experimental research on the utility of machine learning algorithms for detecting vulnerabilities. To execute the experiment, a set of software metrics was extracted in the form of easily accessible rules. Here, 32 supervised machine learning algorithms are considered for the three most frequently occurring vulnerabilities in a software system, namely LawOfDemeter, BeanMemberShouldSerialize, and LocalVariableCouldBeFinal. Using the J48 machine learning algorithm, up to 96% accuracy in vulnerability detection was achieved. The results are validated with tenfold cross-validation, and statistical parameters such as the ROC curve, Kappa statistic, recall, and precision are used to analyze them.
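The tenfold cross-validation used to validate the results partitions the samples into ten folds, each serving once as the test set while the rest train the model. A minimal index-splitting sketch (illustrative; libraries such as Weka or scikit-learn provide this built in):

```python
import random

def k_fold_indices(n, k=10, seed=0):
    """Sketch of k-fold cross-validation splitting: shuffle the sample
    indices once, then hand each fold out as the test set in turn."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]          # k near-equal folds
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

# Ten train/test splits over 50 samples.
splits = list(k_fold_indices(50, k=10))
```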

Journal ArticleDOI
TL;DR: In this paper, a comparative analysis of the A*, Theta*, and Lazy Theta* path planning strategies is presented in a 3D environment, using two performance metrics: computational time and path length.
Abstract: Finding a safe and optimal path from the source node to the target node while avoiding collisions with environmental obstacles is always a challenging task. The task becomes even more complicated when the application involves an Unmanned Aerial Vehicle (UAV), because a UAV follows an aerial path from source to target defined in 3D space. The A* (A-star) algorithm is the path planning strategy of choice in such scenarios because of its simple implementation and promise of optimality. However, A* guarantees the shortest path on a graph, not in a real continuous environment. Theta* (Theta-star) and Lazy Theta* (Lazy Theta-star) are variants of A* that overcome this shortcoming at the cost of increased computational time. In this research work, a comparative analysis of the A*, Theta*, and Lazy Theta* path planning strategies is presented in a 3D environment. The algorithms are tested in 2D and 3D scenarios with distinct dimensions and obstacle complexity. Two performance metrics are used for the comparison: computational time, the time taken to generate the path, and path length, the length of the generated path.
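The baseline A* strategy compared in the paper can be sketched on a small occupancy grid; Theta* differs by additionally checking line-of-sight to a node's grandparent to allow any-angle shortcuts. A minimal A* illustration (2-D and 4-connected for brevity, although the paper's focus is 3-D):

```python
import heapq

def a_star(grid, start, goal):
    """A* sketch on a 4-connected occupancy grid (0 = free, 1 = obstacle)
    with a Manhattan-distance heuristic; returns the path as a cell list."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # admissible heuristic: Manhattan distance to goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]   # (f, g, cell, path)
    seen = set()
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
```

With the wall across the middle row, the planner routes around the right side, giving the optimal seven-cell path.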

Journal ArticleDOI
TL;DR: In this article, the leaves and seeds of Mangifera indica (mango) were experimentally used, owing to their aroma and bitterness, as an auxiliary to hops in the production of light beer.
Abstract: Commercial beer based on Humulus lupulus (hops) contains chemical substances that are hazardous to human health, so an effective substitute for hops would be a revolution in the brewing industry. The main objective of this study was to replace hops in beer production, owing to their insalubrious nature, and to develop a nutritious beer using vegetables and fruits. In this study, the leaves and seeds of Mangifera indica (mango) were experimentally used for their aroma and bitterness as an auxiliary to hops. The Mango Based Light Beer (MBLB) was produced with mango as gruit, barley malt as the main source, beetroot as an adjunct, and Citrus reticulata as a seasoning; the orange-flavored MBLB was then produced by lab-scale fermentation using Saccharomyces cerevisiae. MBLB is high in nutritious substances (protein, carbohydrate, minerals, total phenols, flavonoids, vitamin C). GC-MS analysis reveals the presence of beneficial bioactive compounds such as maltol, 4H-pyran-4-one 2,3-dihydro-3,5-dihydroxy-6-methyl, 5-hydroxymethylfurfural, and furan-2-carboxaldehyde, 5-(1-piperidyly) in the mango-based light beer. The levels of total phenolics and flavonoids were higher than in hops-based commercial beer (HBCB), while GC-MS also detected harmful chemicals, including 1-pentanol, silanediol, urea, pyridine, and dl-threitol, in the HBCB. Owing to its high phenolic and flavonoid contents, MBLB showed increased antioxidant properties. The study makes clear that MBLB is effective and nutritionally enhanced in comparison with HBCB, and its higher antioxidant potential further supports its nutritional significance.

Journal ArticleDOI
TL;DR: The proposed idea presents the use of Convolutional Neural Networks with Spatial Transformer Networks and lane detection in real time to increase the efficiency of autonomous vehicles.
Abstract: Recently, the amount of research in the field of self-driving cars has grown significantly, with autonomous vehicles having clocked more than 10 million miles, providing a substantial amount of data for use in training and testing. The most complex part of training is the use of computer vision for feature extraction and object detection in real time. Much relevant research has been done on improving algorithms in the area of image segmentation. The proposed idea presents the use of Convolutional Neural Networks with Spatial Transformer Networks and lane detection in real time to increase the efficiency of autonomous vehicles. The depth of the neural network helps in training vehicles, and during the testing phase the vehicles learn to make decisions based on the training data. In the case of sudden changes to the environment, the vehicle is able to make decisions quickly to prevent damage or danger to lives. Along with lane detection, a self-driving car must also be able to detect traffic signs. The proposed approach uses the Adam optimizer, which runs on top of the LeNet-5 architecture. The LeNet-5 architecture is analyzed and compared with a feed-forward neural network approach. The accuracy of the LeNet-5 architecture was found to be 97%, while the accuracy of the feed-forward neural network was 94%.
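The shape bookkeeping behind LeNet-5 can be checked with simple arithmetic. The sketch below walks the classic LeNet-5 feature extractor; a 32x32 single-channel input is assumed here, whereas the paper's traffic-sign variant may use different sizes:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial size after a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

# Classic LeNet-5 layer stack on an assumed 32x32 single-channel input.
size, channels, params = 32, 1, 0
shapes = {}
for name, k, stride, out_ch, is_conv in [
    ("C1", 5, 1, 6, True),    # 5x5 conv, 6 filters
    ("S2", 2, 2, 6, False),   # 2x2 pooling
    ("C3", 5, 1, 16, True),   # 5x5 conv, 16 filters
    ("S4", 2, 2, 16, False),  # 2x2 pooling
]:
    size = conv_out(size, k, stride)
    if is_conv:
        params += (k * k * channels + 1) * out_ch  # weights + biases
    channels = out_ch
    shapes[name] = (channels, size, size)

flattened = channels * size * size  # input width of the first dense layer
print(shapes, flattened, params)
```

The 400 flattened features feed the fully connected layers (120, 84, then the class count), which is where a plain feed-forward network, lacking the convolutional front end, starts from raw pixels instead.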

Journal ArticleDOI
TL;DR: This work optimized the problem of supplier selection and order allocation in a centralized supply chain using mixed-integer nonlinear programming models and risk reduction strategies, and found that the simultaneous use of risk reduction strategies significantly reduces supply chain costs and increases its benefits.
Abstract: Supply chain managers have realized that competition between supply chains has replaced competition between companies. In addition, with increasing disruptions and uncertainty in planning, companies need to be able to make informed decisions under risk. Coordination in the resilient supply chain and appropriate selection of suppliers play a key role in risky situations. Previous research has mainly investigated the impact of resilience strategies in decentralized supply chains and has ignored the reliability of suppliers in the decision-making process. Therefore, we provide an effective framework for selecting reliable suppliers and allocating orders, which increases the supply chain's benefits by considering risk reduction strategies and coordination between the buyer and the supplier. Thus, we optimized the problem of supplier selection and order allocation in a centralized supply chain using mixed-integer nonlinear programming models and risk reduction strategies. These strategies are protected suppliers, back-up suppliers, reserving additional capacity, emergency stock, and geographical separation. In addition, by applying the failure mode and effects analysis technique and a risk priority number constraint, suppliers' reliability has been taken into account. A numerical example is solved with an exact method, and the application of the proposed models in a case study has been investigated with the Grasshopper optimization algorithm. Based on the sensitivity analysis results, we found that the simultaneous use of risk reduction strategies in the models significantly reduces supply chain costs and increases the chain's benefits. Moreover, considering the reliability constraints leads supply chain managers to choose suppliers with more desirable reliability.
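The flavor of order allocation under a risk priority number (RPN) constraint can be conveyed with a toy greedy allocation. This is emphatically not the paper's MINLP or Grasshopper method; the suppliers, costs, capacities, and RPN threshold are all invented for illustration:

```python
# Toy supplier data: name -> (unit_cost, capacity, risk priority number).
suppliers = {
    "S1": (10.0, 60, 120),
    "S2": (8.0, 40, 300),   # cheapest, but its RPN violates the constraint
    "S3": (9.0, 50, 150),
}
demand, rpn_limit = 100, 200

# Keep only suppliers whose RPN satisfies the reliability constraint,
# then fill demand from the cheapest eligible supplier first.
eligible = sorted(
    (name for name, (_, _, rpn) in suppliers.items() if rpn <= rpn_limit),
    key=lambda n: suppliers[n][0],
)
allocation, remaining = {}, demand
for name in eligible:
    qty = min(suppliers[name][1], remaining)
    if qty:
        allocation[name] = qty
        remaining -= qty

total_cost = sum(q * suppliers[n][0] for n, q in allocation.items())
print(allocation, total_cost)
```

Note how the RPN constraint forces the demand onto costlier but more reliable suppliers; in the full model this trade-off interacts with the resilience strategies (back-up suppliers, emergency stock, and so on), which is why a solver rather than a greedy rule is needed.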

Journal ArticleDOI
TL;DR: An effective automated PCOS detection and classification system is proposed from the ultrasound images by analyzing the affected and unaffected cases and achieves better results based on the scores calculated from the aforementioned evaluation criteria and proves its efficiency.
Abstract: Polycystic ovary syndrome (PCOS) is a hormone-related illness in women, commonly known as an endocrine system disorder. It mainly affects women during their childbearing years, between the ages of 15 and 44. This condition causes an imbalance in hormone production that leads to other problems such as irregular menstrual cycles and baldness, and it is also linked with long-term health issues such as heart disease and diabetes. Around 70 percent of women with PCOS worldwide have not been diagnosed properly, and most of them are unaware of the presence of the disease in their body. In this paper, an effective automated PCOS detection and classification system is proposed for ultrasound images, analyzing both affected and unaffected cases. Pre-processing of the input images was done with a Gaussian low-pass filter, multilevel thresholding was used for image segmentation, features were extracted through the proposed GIST-MDR technique, and PCOS classification was performed with supervised machine learning algorithms. The proposed system attained better results based on the scores calculated from the evaluation criteria, proving its efficiency. With its optimal features, the proposed GIST-MDR feature extraction model produces an accuracy of 93.82% for Support Vector Machine, 89.7% for Random Forest, 91.05% for Linear Discriminant Analysis, and 88.26% for the Naive Bayes algorithm. This computer-aided automated disease diagnosis system assists medical practitioners in difficult situations to make accurate decisions about a patient's condition.

Journal ArticleDOI
TL;DR: In this article, an approach based on fuzzy nonlinear programming is proposed to improve the anti-collision warning system of road traffic; it can effectively reduce false alarms and improve road traffic collision warning.
Abstract: To improve the anti-collision warning system of road traffic, an approach based on fuzzy nonlinear programming is proposed. People hope to know of an accident in advance and then take corresponding protective measures to avoid it, thereby reducing the number of accidents. The specific content of this method is to establish a safety distance model to prevent rear-end collisions. The car-following process can be divided into three situations: the leading vehicle is stationary, the leading vehicle is moving at uniform speed or accelerating, and the leading vehicle is decelerating. Mathematical models of the safe distance for following and overtaking are established for each situation. Fuzzy mathematical theory is used to account for the influence of external environmental factors such as weather conditions, road condition, and vehicle speed, determining the parameters involved in the model and the fuzzy relationships between some parameters and each influencing factor. A simulation model of the vehicle anti-collision warning system is built using fuzzy inference rules for these parameters, and simulation tests are conducted. The test results verify the rationality of the safety distance model and its parameter settings. The approach can effectively reduce false alarms and improve the road traffic collision warning system.
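A common kinematic form of the rear-end safe distance gives a feel for what such a model computes. The function below is an illustrative stand-in, not the paper's exact formulation; in the paper, quantities like the reaction time, braking deceleration, and margin would be fuzzy-adjusted for weather, road condition, and speed:

```python
def safe_distance(v_follow, v_lead, t_react=1.0, a_brake=6.0, d0=2.0):
    """Minimum safe following distance in metres for the case where both
    vehicles brake with deceleration a_brake (m/s^2). Speeds are in m/s;
    t_react is the driver/system reaction time and d0 a standstill margin.
    All parameter values here are illustrative assumptions."""
    d = v_follow * t_react + (v_follow**2 - v_lead**2) / (2 * a_brake) + d0
    return max(d, d0)  # never report less than the standstill margin

# Two of the abstract's three situations at 72 km/h (20 m/s):
print(round(safe_distance(20.0, 0.0), 1))   # leading vehicle stationary
print(round(safe_distance(20.0, 20.0), 1))  # same speed: only the reaction gap
```

The stationary-lead case dominates (the follower must shed all its kinetic energy), which is why warning thresholds grow quadratically with speed while the equal-speed case stays linear.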

Journal ArticleDOI
TL;DR: Findings suggest that the model with the insertion of volume agility effect performs better when compared with the one without the volume agility, and the sensitivity analysis unfolds valuable managerial insights for decision-makers.
Abstract: A sustainable production inventory model that addresses some pragmatic scenarios of the production process is developed. The production process is considered imperfect, resulting in the production of some imperfect quality items with a known distribution. The imperfect products are managed through an efficient rework process that makes the items suitable to be sold at their original markup price. The agile nature of the manufacturing process is considered, as are the energy usage and carbon emissions of the production process. Undoubtedly, demand is fundamentally sensitive to price; thus, the model incorporates price-contingent demand. The objective is to maximize the overall inventory turnover by jointly optimizing the selling price, production rate, and production time. A numerical example is included to validate the model. Further, the sensitivity analysis unfolds valuable managerial insights for decision-makers. Findings suggest that the model with the volume agility effect performs better than the one without it.
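Price-contingent demand can be illustrated with its simplest linear form. This is a one-variable caricature of the model (the paper jointly optimizes price, production rate, and production time, with rework and emission costs); the demand intercept, slope, and unit cost are assumed:

```python
# Linear price-contingent demand D(p) = a - b*p; profit (p - c) * D(p).
a, b, c = 100.0, 2.0, 10.0   # assumed demand intercept, slope, unit cost

def profit(p):
    return (p - c) * max(a - b * p, 0.0)

# Grid search over candidate prices from 10.00 to 50.00 in steps of 0.01.
best_p = max((p / 100 for p in range(1000, 5001)), key=profit)
analytic = (a + b * c) / (2 * b)   # closed-form optimum for comparison
print(round(best_p, 2), round(analytic, 2), profit(best_p))
```

Raising the price trades margin against lost demand, and the two effects balance at the analytic optimum; in the full model this balance shifts further with the production-rate and production-time decisions.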

Journal ArticleDOI
TL;DR: Cat Swarm Optimization is explored here for detecting a possible partition in the network before it occurs, using the seeking mode of the algorithm; the approach conserves node energy by reducing the number of failed and retried transmissions.
Abstract: Underwater wireless sensor networks are characterized by dynamic network topology owing to node mobility. Frequent changes in the positions of nodes due to water currents cause network partitioning. This often results in frequent network failures and causes void spaces. Frequent network failures lead to unreliable data transmissions in which nodes injudiciously drain their power resources. Such a network needs to adapt its routing in order to diminish the challenges caused by node mobility; a dynamic approach addressing these issues will also enhance the network lifetime. Node mobility creates articulation points (APs) in the network topology. An AP is analogous to a bridge in a graph: it leads to partitioning of the network, which only yields failed and unreliable transmissions. Cat Swarm Optimization is explored here, with the seeking mode of the algorithm detecting a possible partition in the network prior to its occurrence. In the tracking mode, the cat closest to the predicted AP is selected to move towards it in order to prevent the network from partitioning. The proposed approach enhances the network lifetime as it avoids disconnections and failed transmissions, and it conserves node energy by reducing the number of failed and retried transmissions.
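The graph property the seeking mode tries to predict, an articulation point whose removal partitions the network, can be checked deterministically with a depth-first search. The sketch below uses the standard Tarjan-style low-link test on a small invented topology:

```python
def articulation_points(adj):
    """Find articulation points of an undirected graph given as
    {node: [neighbours]} using DFS discovery times and low-links."""
    disc, low, aps = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # No descendant of v reaches above u: u is a cut vertex.
                if parent is not None and low[v] >= disc[u]:
                    aps.add(u)
        if parent is None and children > 1:    # root with >1 DFS subtree
            aps.add(u)

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return aps

# Node 2 joins two clusters: removing it partitions the network.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
print(sorted(articulation_points(adj)))
```

In the paper's setting this check is only available after the topology has already changed; the contribution is using the swarm's seeking mode to anticipate such nodes before mobility actually creates the partition.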

Journal ArticleDOI
Ishwarappa1, J. Anuradha1
TL;DR: The simulation results show that the proposed deep CNN with reinforcement-LSTM model gives better performance in terms of various metrics, with a POCID of more than 85%, R^2 of more than 80%, ARV of less than 0.024%, and MAPE of less than 0.04%, when compared with other existing techniques.
Abstract: Exact prediction of future stock prices is impossible due to the complexity and uncertainty of stock data. An effective prediction system is required for the successful analysis of future stock prices for every company, and it is complex for researchers to analyze large volumes of stock price data while obtaining good accuracy. For this reason, a deep CNN with reinforcement-LSTM model is proposed for forecasting future stock prices based on big data. Four real-time stock indices, NASDAQ, FTSE, TAIEX, and BSE, are used for analyzing the efficiency of the proposed model. The model's performance is evaluated by conducting different experiments: 1-month-ahead, 1-week-ahead, and 1-day-ahead forecasts. Data for all working days over a consecutive year were collected for these experiments. The simulation results show that the proposed model gives better performance in terms of various metrics, with a POCID of more than 85%, R^2 of more than 80%, ARV of less than 0.024%, and MAPE of less than 0.04%, when compared with other existing techniques.
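Of the reported metrics, POCID (prediction of change in direction) is the least standard and easy to compute directly: the percentage of steps on which the predicted series moves in the same direction as the actual one. The series below are invented for illustration:

```python
def pocid(actual, predicted):
    """Percentage of time steps where predicted and actual prices
    change in the same direction (ties count as misses)."""
    hits = sum(
        1 for t in range(1, len(actual))
        if (actual[t] - actual[t - 1]) * (predicted[t] - predicted[t - 1]) > 0
    )
    return 100.0 * hits / (len(actual) - 1)

actual    = [100, 102, 101, 105, 104, 106]
predicted = [100, 103, 102, 104, 105, 107]
print(pocid(actual, predicted))  # 4 of 5 direction changes match
```

Unlike MAPE or ARV, POCID ignores the magnitude of errors entirely, which is why trading-oriented studies report it alongside the magnitude metrics rather than instead of them.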