Author

Stephen J. Nuagah

Bio: Stephen J. Nuagah is an academic researcher. The author has contributed to research in topics: Computer science & Medicine. The author has an h-index of 1, and has co-authored 2 publications receiving 1 citation.

Papers
Journal ArticleDOI
TL;DR: A new image compression scheme called the GenPSOWVQ method, which uses a recurrent neural network with wavelet VQ, attains precise compression while maintaining image accuracy with lower computational costs when encoding clinical images.
Abstract: Medical diagnosis is a time-sensitive step toward proper medical treatment. Automation systems have been developed to improve this process. In such systems, images are processed and sent to a remote system for processing and decision making, and the images are compressed to reduce processing and computational costs. Images require large storage and transmission resources to perform their operations, so a good image compression strategy can help minimize these requirements. The trade-off between compression and accuracy is a standing challenge; therefore, to optimize imaging, it is necessary to reduce inconsistencies in medical imaging. This paper introduces a new image compression scheme called the GenPSOWVQ method that uses a recurrent neural network with wavelet VQ. The codebook is built using a combination of particle swarm optimization and genetic algorithms. The newly developed image compression model attains precise compression while maintaining image accuracy with lower computational costs when encoding clinical images. The proposed method was tested on real-time medical imaging using the PSNR, MSE, SSIM, NMSE, SNR, and CR indicators. Experimental results show that the proposed GenPSOWVQ method yields higher PSNR and SSIM values for a given compression ratio than the existing methods. In addition, the proposed GenPSOWVQ method yields lower values of MSE, RMSE, and SNR for a given compression ratio than the existing methods.
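For reference, the fidelity indicators named above have standard definitions. The following is a minimal sketch of MSE, PSNR, and CR in Python (illustrative only; it is not the paper's code, and the synthetic image stands in for real clinical data):

```python
import numpy as np

def mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Mean squared error between the original and the decompressed image."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means better fidelity."""
    err = mse(original, reconstructed)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    """CR = uncompressed size / compressed size."""
    return original_bytes / compressed_bytes

# Toy usage with a synthetic 8-bit image and a slightly perturbed reconstruction.
img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
recon = np.clip(img.astype(int) + np.random.randint(-3, 4, img.shape), 0, 255).astype(np.uint8)
print(f"MSE={mse(img, recon):.2f}, PSNR={psnr(img, recon):.2f} dB")
```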

65 citations

Journal ArticleDOI
TL;DR: The IntOPMICM technique is introduced, a new image compression scheme that combines GenPSO and VQ; it produces higher PSNR and SSIM values for a given compression ratio than existing methods, according to experimental data.
Abstract: Due to the increasing number of medical images being utilized for the diagnosis and treatment of diseases, lossy or improper image compression has become more prevalent in recent years. The compression ratio and image quality, commonly quantified by PSNR values, are used to evaluate the performance of a lossy compression algorithm. This article introduces the IntOPMICM technique, a new image compression scheme that combines GenPSO and VQ. A combination of particle swarm optimization and genetic algorithms was used to create the codebook. The PSNR, MSE, SSIM, NMSE, SNR, and CR indicators were used to test the suggested technique on real-time medical imaging. The suggested IntOPMICM approach produces higher PSNR and SSIM values for a given compression ratio than existing methods, according to experimental data. Furthermore, for a given compression ratio, the suggested IntOPMICM approach produces lower MSE, RMSE, and SNR values than existing methods.
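Both papers build a vector-quantization codebook with a GenPSO hybrid whose details are not given here. A minimal sketch of the VQ encode/decode step itself (a random codebook stands in for the optimized one; this is an illustration under stated assumptions, not the authors' implementation):

```python
import numpy as np

def vq_encode(blocks: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each flattened image block to the index of its nearest codeword."""
    # blocks: (n, d); codebook: (k, d); Euclidean nearest-neighbour search
    dists = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

def vq_decode(indices: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Reconstruct blocks by codeword lookup."""
    return codebook[indices]

rng = np.random.default_rng(0)
blocks = rng.integers(0, 256, (100, 16)).astype(np.float64)   # 100 blocks of 4x4 pixels
codebook = rng.integers(0, 256, (32, 16)).astype(np.float64)  # 32 codewords (placeholder)
idx = vq_encode(blocks, codebook)    # only these small indices need be stored or sent
recon = vq_decode(idx, codebook)
```

Compression comes from transmitting only the codeword indices; the GenPSO step in the papers optimizes the codebook so that reconstruction error stays low.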

65 citations

Journal ArticleDOI
TL;DR: A threshold-filtering approach is proposed that removes out-of-class samples from unlabeled data during semisupervised training, improving SAR image classification performance on the MSTAR data set.
Abstract: Traditionally, nonlinear data processing has been approached via polynomial filters, which are straightforward extensions of many linear methods, or through neural network techniques. In contrast to linear approaches, which often yield algorithms that are simple to apply, nonlinear learning machines such as neural networks demand more computation and are more likely to face nonlinear optimization difficulties, which are harder to solve. Kernel methods, a more recently developed technology, are strong machine learning approaches that have a less complicated architecture and give a straightforward way of transforming nonlinear optimization problems into convex optimization problems. Typical analytical tasks in kernel-based learning include classification, regression, and clustering. For image processing applications, semisupervised deep learning, which is driven by a small amount of labeled data and a large amount of unlabeled data, has shown excellent performance in recent years. Today's semisupervised learning methods operate on the assumption that labeled and unlabeled data are distributed in a similar manner, and their performance is largely determined by how well that assumption holds. When there is out-of-class data among the unlabeled data, the system's performance is adversely affected. In real-world applications, verifying that unlabeled data does not include data belonging to a different category is difficult, and this is especially true in synthetic aperture radar (SAR) image identification. Using threshold filtering, this work addresses the problem of unlabeled input containing out-of-class data, which has a detrimental influence on model performance when used to train the model in a semisupervised learning environment. During training, unlabeled data that does not belong to any known category is filtered out by the model, using two different data sets that the model selects in order to optimize its performance. A series of experiments was carried out on the MSTAR data set, and the superiority of our method was shown in comparison against a large number of state-of-the-art semisupervised classification algorithms, especially when the unlabeled data contained a significant proportion of out-of-class samples. The performance of each kernel function is tested independently using two metrics, the false alarm (FA) and target miss (TM) rates, which are used to calculate the proportion of incorrect judgments made by the techniques.
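As a rough illustration of the threshold-filtering idea (not the paper's exact two-set selection procedure), the sketch below drops low-confidence unlabeled samples, which are the ones most likely to be out-of-class, before pseudo-labeling; `model` is assumed to be any classifier exposing a scikit-learn-style `predict_proba`:

```python
import numpy as np

def filter_unlabeled(model, unlabeled_X: np.ndarray, threshold: float = 0.9):
    """Keep only unlabeled samples classified with high confidence; the rest
    are treated as possibly out-of-class and excluded from the next round."""
    proba = model.predict_proba(unlabeled_X)     # shape (n_samples, n_classes)
    confidence = proba.max(axis=1)
    keep = confidence >= threshold               # the threshold-filtering step
    pseudo_labels = proba.argmax(axis=1)[keep]
    return unlabeled_X[keep], pseudo_labels
```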

39 citations

Journal ArticleDOI
TL;DR: The AIoT-H application is explored in this research article due to its potential to work alongside existing and emerging technologies, as well as to bring useful solutions to healthcare security challenges.
Abstract: Significant research has been undertaken in health care and in the administration of cutting-edge artificial intelligence (AI) technologies throughout the previous decade. Healthcare professionals have studied smart gadgets and other medical technologies, along with the AI-based Internet of Things (IoT) (AIoT). Connecting the two areas makes sense in terms of improving care for rural and isolated individuals. The healthcare industry has made tremendous strides in efficiency, affordability, and usefulness as a result of new research options and major cost reductions. These AIoT-based medical advancements can be both beneficial and detrimental. While the IoT concept undoubtedly offers a number of benefits, it also poses fundamental security and privacy concerns regarding medical data. Resource-constrained AIoT devices are vulnerable to a number of attacks, which can significantly impair their performance, and cryptographic algorithms used in the past are inadequate for safeguarding IoT-enabled networks, presenting substantial security risks. The AIoT is made up of three layers: perception, network, and application, all of which are vulnerable to security threats. These threats can be active or passive in nature, and they can originate both within and outside the network. Numerous IoT security issues, including replay, sniffing, and eavesdropping attacks, have the ability to obstruct network communication. The AIoT-H application is explored in this research article due to its potential to work alongside existing and emerging technologies, as well as to bring useful solutions to healthcare security challenges. Additionally, several potential problems and inconsistencies with the AIoT-H technique are discovered every day.

35 citations

Journal ArticleDOI
TL;DR: In this article, a fully automated system is developed that identifies the gender and age of humans based on digital images of teeth, using a multiclass SVM (MSVM) classifier for age estimation and a LIBSVM classifier for gender prediction.
Abstract: The use of digital medical images is increasing with advanced computational power, which has immensely contributed to developing more sophisticated machine learning techniques. Traditionally, determination of the age and gender of individuals was performed manually by forensic experts using their professional skills, which may take a few days to produce results. A fully automated system was developed that identifies the gender and age of humans based on digital images of teeth. Since teeth are a strong and unique part of the human body, least subject to structural change and stable over long durations, the identification of gender- and age-related information is systematically carried out by analyzing OPG (orthopantomogram) images. A total of 1142 digital X-ray images of teeth were obtained from dental colleges serving the population of the mid-eastern part of Karnataka state in India. 80% of the digital images were used for training, and the remaining 20% for testing. The proposed gender and age determination system finds wide application in the forensic field to produce results quickly and accurately. The prediction system used a multiclass SVM (MSVM) classifier for age estimation and a LIBSVM classifier for gender prediction, and 96% accuracy was achieved.
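A minimal sketch of the train/test protocol described above (80/20 split, SVM classifiers), assuming features have already been extracted from the OPG images; the random feature matrix and labels are placeholders, not the paper's data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(1142, 64))        # placeholder features for 1142 OPG images
y_age = rng.integers(0, 5, 1142)       # hypothetical multiclass age-group labels
y_gender = rng.integers(0, 2, 1142)    # binary gender labels

# 80% training / 20% testing, as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y_age, test_size=0.2, random_state=0)

age_clf = SVC(kernel="rbf", decision_function_shape="ovr")  # multiclass SVM (one-vs-rest)
age_clf.fit(X_tr, y_tr)
print("age-group accuracy:", age_clf.score(X_te, y_te))
```

The gender model would follow the same pattern with `y_gender` and a binary SVC.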

13 citations


Cited by
Journal ArticleDOI
TL;DR: This study targets a systematic review of automated diagnosis for heart disease prediction based on different types of modalities, i.e., clinical feature-based data, images, and ECG; it critically evaluates previous methods and presents their limitations.
Abstract: One of the leading causes of death around the globe is heart disease. The heart is the organ responsible for the supply of blood to each part of the body. Coronary artery disease (CAD) and chronic heart failure (CHF) often lead to heart attack. Traditional medical procedures (angiography) for the diagnosis of heart disease have a higher cost as well as serious health concerns. Therefore, researchers have developed various automated diagnostic systems based on machine learning (ML) and data mining techniques. ML-based automated diagnostic systems provide affordable, efficient, and reliable solutions for heart disease detection. Various ML and data mining methods and data modalities have been utilized in the past. Many previous review papers have presented systematic reviews based on one type of data modality. This study, therefore, targets a systematic review of automated diagnosis for heart disease prediction based on different types of modalities, i.e., clinical feature-based data, images, and ECG. Moreover, this paper critically evaluates the previous methods and presents their limitations. Finally, the article provides some future research directions in the domain of automated heart disease detection based on machine learning and multiple data modalities.

32 citations

Journal ArticleDOI
TL;DR: The proposed routing protocol adaptively tunes the height and opening of the cone based on the network structure to effectively improve network performance, significantly reducing energy tax and end-to-end delay while improving the packet delivery ratio.
Abstract: In the recent past, a significant increase has been observed in the use of underwater wireless sensor networks for aquatic applications. However, underwater wireless sensor networks face several challenges...
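The full protocol sits behind the truncated abstract, but the core geometric test a cone-based forwarder selection implies can be sketched. The following checks whether a candidate relay lies inside a cone of given height and half-opening angle pointing from the sender toward the sink; the names and parameters are illustrative assumptions, not the paper's method:

```python
import numpy as np

def in_forwarding_cone(sender, candidate, sink, half_angle_deg, height):
    """True if `candidate` lies inside the cone with apex at `sender`,
    axis pointing toward `sink`, with the given half-angle and height."""
    axis = (sink - sender) / np.linalg.norm(sink - sender)
    v = candidate - sender
    along = float(np.dot(v, axis))                    # projection onto the cone axis
    if along <= 0 or along > height:
        return False
    radial = float(np.linalg.norm(v - along * axis))  # distance from the axis
    return radial <= along * np.tan(np.radians(half_angle_deg))

sender = np.array([0.0, 0.0, -100.0])   # node 100 m below the surface
sink = np.array([0.0, 0.0, 0.0])        # surface sink/buoy
candidate = np.array([5.0, 3.0, -60.0])
print(in_forwarding_cone(sender, candidate, sink, half_angle_deg=30.0, height=80.0))
```

Adaptively tuning the half-angle and height then widens or narrows the candidate relay set as the network structure changes, which is the behaviour the TL;DR describes.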

20 citations

Proceedings ArticleDOI
16 Oct 2022
TL;DR: In this article, the authors propose a trust-based safe routing protocol with the goal of mitigating the interference of black hole nodes in the course of routing in mobile ad-hoc networks.
Abstract: As a result of the inherent weaknesses of the wireless medium, ad hoc networks are susceptible to a broad variety of threats and attacks. As a direct consequence, intrusion detection, as well as security, privacy, and authentication in ad-hoc networks, has developed into a primary focus of current study. This body of research aims to identify the dangers posed by a variety of attacks that are often seen in wireless ad-hoc networks and to provide strategies to counteract those dangers. The black hole attack, wormhole attack, selective forwarding attack, Sybil attack, and denial-of-service attack are the specific topics covered in this thesis. In this paper, we describe a trust-based safe routing protocol with the goal of mitigating the interference of black hole nodes in the course of routing in mobile ad-hoc networks. The overall performance of the network is negatively impacted when there are black hole nodes along the route that packets take. As a result, we have developed a routing protocol that reduces the likelihood that packets will be lost to black hole nodes. This routing system has been subjected to experimental testing to verify that the most secure path is selected for the delivery of packets between a source and a destination. The intrusion of wormholes into a wireless network results in the segmentation of the network as well as disorder in the routing. We therefore provide an effective approach for locating wormholes by using ordinal multi-dimensional scaling and round-trip duration in wireless ad hoc networks with either sparse or dense topologies. Wormholes linked by both short-path and long-path wormhole linkages may be found using the given approach, which has been tested experimentally to guarantee that the ad hoc network does not include any wormholes that go unnoticed. To fight against selective forwarding attacks in wireless ad-hoc networks, we have developed three different techniques. The first is an incentive-based algorithm that makes use of a reward-punishment system to drive cooperation among nodes for the purpose of forwarding messages in crowded ad-hoc networks. A unique adversarial model has been developed in which three distinct types of nodes and the activities they participate in are specified. We have shown that the suggested incentive-based strategy prohibits nodes from adopting individualistic behaviour, which ensures collaboration in the process of packet forwarding. To guarantee that intermediate nodes in resource-constrained ad-hoc networks accurately convey packets, the second approach proposes a model based on non-cooperative game theory. This game reaches a desired equilibrium state, which assures that cooperation in multi-hop communication is feasible. In the third algorithm, we present a detection approach that locates malicious nodes in multihop hierarchical ad-hoc networks by employing binary search and control packets. We have shown that the cluster head is capable of accurately identifying the malicious node by analysing the sequences of packets that are dropped along the path leading from a source node to the cluster head.
A lightweight symmetric encryption technique that uses Binary Playfair is presented as a means of safeguarding data transport. We demonstrate via experimentation that the suggested encryption method is efficient with regard to energy consumption, encryption time, and memory overhead. This lightweight encryption technique is used in clustered wireless ad-hoc networks to reduce the likelihood of a Sybil attack occurring in such networks.
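A minimal sketch of the general trust-filtered next-hop idea behind such protocols; the trust scale, threshold, and update rule here are assumptions for illustration, not the thesis's actual protocol:

```python
def select_next_hop(neighbors: dict, trust: dict, min_trust: float = 0.5):
    """Pick the lowest-cost neighbor whose trust clears the threshold.
    neighbors: node id -> route cost; trust: node id -> trust in [0, 1].
    Suspected black-hole nodes (low trust) are never selected."""
    trusted = {n: c for n, c in neighbors.items() if trust.get(n, 0.0) >= min_trust}
    return min(trusted, key=trusted.get) if trusted else None

def update_trust(trust: dict, node: str, forwarded: bool, alpha: float = 0.2):
    """Exponentially weighted update: reward observed forwarding, punish drops."""
    trust[node] = (1 - alpha) * trust.get(node, 0.5) + alpha * (1.0 if forwarded else 0.0)

trust = {"A": 0.9, "B": 0.2, "C": 0.7}                         # B behaves like a black hole
print(select_next_hop({"A": 3.0, "B": 1.0, "C": 2.0}, trust))  # -> "C", not the cheap "B"
```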

12 citations

Journal ArticleDOI
01 Apr 2022
TL;DR: In architecture, abstraction is present throughout the process, from beginning to end and into the final product, while data abstraction involves manipulating data in meaningful ways.
Abstract: In architecture, abstraction is present from beginning to end and in the final product. Abstraction is used as a method of gaining environmental knowledge to develop the conceptual stages of the design process. Vehicle functions or ATM functions are excellent real-world examples of abstraction. An electrical switchboard is another real-world example: a switchboard gives us an easy way to turn electrical devices on or off, hiding all the details of the electrical circuit. Abstraction applies to both control and data. Control abstraction is the use of subroutines to abstract control flow, while data abstraction involves manipulating data in meaningful ways. Security abstraction allows companies to immediately identify the purpose of each event and apply the best security practices with relevant capabilities to deal with the threat. Abstraction is also useful for defining methods shared across public classes: if multiple classes use the same method, it can be defined once behind an abstract interface. In Swift, this can be achieved through protocols, and abstraction without inheritance can be achieved through protocol extensions. This minimizes the problem and increases performance. Architects are generally highly respected in the community, and if you want to be seen as a respected person in the community, architecture is a great career opportunity! Because of their creativity and attention to detail, architects' work is considered a blend of art and ingenuity.
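The switchboard example above maps directly onto code. Here is a short Python sketch of the same idea (the abstract mentions Swift protocols; Python's abstract base classes are the analogous construct, used here purely for illustration):

```python
from abc import ABC, abstractmethod

class Switchable(ABC):
    """Abstraction: callers see only on/off, never the circuit details."""
    @abstractmethod
    def turn_on(self) -> None: ...

    @abstractmethod
    def turn_off(self) -> None: ...

class LightCircuit(Switchable):
    def turn_on(self) -> None:
        print("relay closed, lamp lit")    # hidden implementation detail

    def turn_off(self) -> None:
        print("relay open, lamp dark")

def flip_on(device: Switchable) -> None:
    device.turn_on()    # works for any Switchable, like pressing a switchboard button

flip_on(LightCircuit())
```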

11 citations

Proceedings ArticleDOI
16 Oct 2022
TL;DR: In this article, a tumor segmentation process is proposed that uses a Region Growing Algorithm with Gray-Level Run-Length Matrix and Centre-Symmetric Local Binary Patterns texture feature extraction.
Abstract: The physical identification of tumors may be a laborious and time-consuming process for medical professionals because of the complex nature of the tumor and the noise that can occur in magnetic resonance (MR) imaging data. Therefore, determining the location of the tumor at an early stage is quite important. Medical scans can track and predict the uncontrolled proliferation of cancer-affected regions at different levels in order to deliver an appropriate diagnosis early. This is accomplished via segmentation in conjunction with classification procedures. Segmentation of the picture obtained from the MRI is a crucial and challenging step in recognizing the tissues of a brain tumor. The proposed work therefore includes a tumor segmentation process using a Region Growing Algorithm with Gray-Level Run-Length Matrix and Centre-Symmetric Local Binary Patterns texture feature extraction. The segmented images then undergo feature extraction with a high level of accuracy. The performance is measured using accuracy, sensitivity, and specificity. The proposed work achieves a sensitivity of 0.97, a specificity of 0.85, and an accuracy of 99.80%.
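A minimal sketch of intensity-based region growing, the core of the segmentation step named above (the paper couples it with GLRLM and CS-LBP texture features, which are not shown; the seed and tolerance here are illustrative assumptions):

```python
import numpy as np
from collections import deque

def region_grow(img: np.ndarray, seed, tol: float) -> np.ndarray:
    """Grow a region from `seed`, adding 4-connected neighbours whose
    intensity lies within `tol` of the seed value; returns a boolean mask."""
    h, w = img.shape
    seed_val = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(float(img[ny, nx]) - seed_val) <= tol:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# Toy MR-like slice: a bright square "tumor" on a dark background.
img = np.zeros((64, 64)); img[20:40, 20:40] = 200.0
print(region_grow(img, seed=(30, 30), tol=10.0).sum())  # 400 pixels segmented
```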

11 citations