Showing papers in "Electronics in 2023"
TL;DR: In this article, a parallel platform solution for high-precision machining equipment based on the Stewart six-degrees-of-freedom parallel platform is presented. Existing parallel platform solutions, however, do not provide a common physical platform for testing the effectiveness of a variety of control algorithms.
Abstract: With the rapid development of the manufacturing industry, industrial automation equipment represented by computer numerical control (CNC) machine tools imposes increasingly strict requirements on the machining accuracy of parts. Compared with the multi-axis serial platform solution, the parallel platform solution is theoretically more suitable for high-precision machining equipment. There are many parallel platform solutions, but none provides a common physical platform for testing the effectiveness of a variety of control algorithms. To address this gap, this paper builds on the Stewart six-degrees-of-freedom parallel platform and mainly studies the platform's construction. This study completed the mechanical structure design of the parallel platform. Based on the microprogrammed control unit (MCU) + pre-driver chip + three-phase full bridge solution, we completed the circuit design of the motor driver. We wrote the MCU firmware that drives the six parallel robotic arms, as well as the parallel-platform control-center program on the PC, and completed joint debugging of the system. Closed-loop control of the parallel platform's workspace pose was achieved.
21 citations
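The pose-to-leg-length mapping at the heart of such a control center is the platform's inverse kinematics. Below is a minimal Python sketch under assumed joint geometry; the radii, angles, and pose values are illustrative, not the paper's dimensions.

```python
# Hypothetical sketch: inverse kinematics of a Stewart platform.
# Given a desired pose (x, y, z, roll, pitch, yaw), compute the six
# leg lengths the controller must command.
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """ZYX Euler rotation matrix."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    return np.array([
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ])

def leg_lengths(base_pts, plat_pts, pose):
    """base_pts, plat_pts: (6, 3) joint coordinates; pose: (x, y, z, r, p, y)."""
    t, angles = np.asarray(pose[:3]), pose[3:]
    R = rotation_matrix(*angles)
    # Leg vector = platform joint (expressed in the world frame) minus base joint.
    legs = (R @ plat_pts.T).T + t - base_pts
    return np.linalg.norm(legs, axis=1)

# Example: joints on circles of radius 0.3 m (base) and 0.2 m (platform).
ang = np.deg2rad(np.arange(6) * 60.0)
base = np.column_stack([0.3 * np.cos(ang), 0.3 * np.sin(ang), np.zeros(6)])
plat = np.column_stack([0.2 * np.cos(ang), 0.2 * np.sin(ang), np.zeros(6)])
print(leg_lengths(base, plat, (0.0, 0.0, 0.25, 0.0, 0.02, 0.0)))
```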
TL;DR: Wang et al. as discussed by the authors proposed an approach named Ghost convolution with BottleneckCSP and a tiny target prediction head incorporating YOLOv5 (GBH-YOLOv5) for PV panel defect detection.
Abstract: Photovoltaic (PV) panel surface-defect detection technology is crucial for the PV industry to perform smart maintenance. Using computer vision technology to detect PV panel surface defects can ensure better accuracy while reducing the workload of traditional worker field inspections. However, multiple tiny defects on the PV panel surface and the high similarity between different defects make it challenging to accurately identify and detect such defects. This paper proposes an approach named Ghost convolution with BottleneckCSP and a tiny target prediction head incorporating YOLOv5 (GBH-YOLOv5) for PV panel defect detection. To ensure better accuracy on multiscale targets, the BottleneckCSP module is introduced, a prediction head for tiny target detection is added to alleviate missed tiny defects, and Ghost convolution is used to improve the model inference speed and reduce the number of parameters. First, the original image is compressed and cropped to enlarge the defect size physically. Then, the processed images are input into GBH-YOLOv5, and the depth features are extracted through network processing based on Ghost convolution, the application of the BottleneckCSP module, and the prediction head for tiny targets. Finally, the extracted features are fused by a Feature Pyramid Network (FPN) and a Path Aggregation Network (PAN) structure. Meanwhile, we compare our method with state-of-the-art methods to verify its effectiveness. The proposed PV panel surface-defect detection network improves mAP performance by at least 27.8%.
14 citations
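For readers unfamiliar with the Ghost convolution building block named in this abstract, here is a minimal PyTorch sketch of the general idea from the GhostNet line of work; the ratio and kernel size are illustrative assumptions, not the GBH-YOLOv5 settings.

```python
# Minimal PyTorch sketch of a Ghost convolution block: a cheap way to
# generate extra ("ghost") feature maps from a reduced primary
# convolution, cutting parameters and inference cost.
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    def __init__(self, in_ch, out_ch, ratio=2, dw_kernel=3):
        # Assumes out_ch is divisible by ratio (here ratio=2, out_ch even).
        super().__init__()
        primary_ch = out_ch // ratio             # fewer "expensive" channels
        cheap_ch = out_ch - primary_ch           # the rest come cheaply
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, 1, bias=False),
            nn.BatchNorm2d(primary_ch), nn.ReLU(inplace=True))
        # A depthwise conv generates the ghost maps from the primary ones.
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, cheap_ch, dw_kernel, padding=dw_kernel // 2,
                      groups=primary_ch, bias=False),
            nn.BatchNorm2d(cheap_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)  # (N, out_ch, H, W)

# Quick shape check on a dummy feature map.
print(GhostConv(16, 32)(torch.randn(1, 16, 64, 64)).shape)
```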
TL;DR: In this paper, a model-based, chattering-free sliding mode control (CFSMC) algorithm is developed to maintain a desired heating value trajectory of the syngas mixture.
Abstract: The fluctuations in the heating value of an underground coal gasification (UCG) process limit its application in electricity generation, where a desired composition of the combustible gases is required to operate gas turbines efficiently. This shortcoming can be addressed by designing a robust control scheme for the process. In the current research work, a model-based, chattering-free sliding mode control (CFSMC) algorithm is developed to maintain a desired heating value trajectory of the syngas mixture. Besides robustness, CFSMC yields reduced chattering due to its continuous control law, and the tracking error converges in finite time. To estimate the unmeasurable states required for controller synthesis, a state-dependent Kalman filter (SDKF) based on a quasi-linear decomposition of the nonlinear model is employed. The simulation results demonstrate that despite external disturbance and measurement noise, the control methodology yields good tracking performance. A comparative analysis is also made between CFSMC, a conventional SMC, and a previously designed dynamic integral SMC (DISMC), which shows that CFSMC yields 71.2% and 69.9% improvements in the root mean squared tracking error with respect to SMC and DISMC, respectively. Moreover, CFSMC consumes 97% and 23.2% less control energy than SMC and DISMC, respectively.
14 citations
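The chattering-reduction idea can be illustrated with a toy example: replacing the discontinuous sign(s) switching term with a smooth approximation such as tanh(s/eps). The sketch below is not the paper's CFSMC law; the scalar plant, gains, and disturbance are assumptions for demonstration only.

```python
# Illustrative sketch: a sliding mode controller whose sign(s) term is
# replaced by tanh(s / eps), one standard way to suppress chattering
# while keeping robustness to a bounded matched disturbance.
import numpy as np

def smc_step(x, x_ref, x_ref_dot, lam=2.0, k=5.0, eps=0.05):
    """Continuous control for the toy scalar plant x_dot = u + d."""
    e = x - x_ref
    s = e                      # for a first-order plant, s = e suffices
    return x_ref_dot - lam * e - k * np.tanh(s / eps)

# Simulate tracking of a sine reference under a bounded disturbance.
dt, T = 1e-3, 5.0
x, log = 0.0, []
for i in range(int(T / dt)):
    t = i * dt
    r, r_dot = np.sin(t), np.cos(t)
    u = smc_step(x, r, r_dot)
    d = 0.5 * np.sin(20 * t)   # bounded matched disturbance
    x += (u + d) * dt          # Euler integration of the plant
    log.append(abs(x - r))
print(f"final tracking error: {log[-1]:.4f}")
```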
TL;DR: Wang et al. as mentioned in this paper focused on the healthcare security issues in blockchain and sorted out the security risks in six layers of blockchain technology by comparing and analyzing existing security measures, which promotes theoretical research and robust security protocol development in current and future distributed work environments.
Abstract: Blockchain technology provides a data structure with inherent security properties that include cryptography, decentralization, and consensus, which ensure trust in transactions. It covers widely applicable usages, such as in intelligent manufacturing, finance, the Internet of things (IoT), and medicine and health, and it is especially relevant to medical health data security and privacy protection. Its natural attributes, such as contracts and consensus mechanisms, have leading-edge advantages in protecting data confidentiality, integrity, and availability. Its security issues are gradually being revealed through in-depth research and vigorous development. Unlike traditional paper storage methods, modern medical records are stored electronically. Blockchain technology provides a decentralized solution to trust issues between distrusting parties without third-party guarantees, but this "trustless" security is easily misunderstood, and it obscures the relevant security differences between public and private blockchains. These advantages and disadvantages motivated us to provide a comprehensive study of the applicability of blockchain technology. This paper focuses on healthcare security issues in blockchain and sorts out the security risks in six layers of blockchain technology by comparing and analyzing existing security measures. It also explores and defines the different security attacks and challenges that arise when applying blockchain technology, which promotes theoretical research and robust security protocol development in current and future distributed work environments.
12 citations
TL;DR: In this paper, a general data augmentation technique for various scenarios is proposed, which examines the challenge of achieving parallel corpus diversity and high quality in both rich- and low-resource settings, and integrates the low-frequency word substitution method and the reverse translation approach for complementary benefits.
Abstract: Amid the rapid advancement of neural machine translation, data sparsity has been a major obstacle. To address this issue, this study proposes a general data augmentation technique for various scenarios. It examines the challenge of achieving parallel corpus diversity and high quality in both rich- and low-resource settings, and it integrates the low-frequency word substitution method and the reverse translation approach for complementary benefits. Additionally, the method improves the pseudo-parallel corpus generated by reverse translation by substituting low-frequency words, and it includes a grammar error correction module to reduce grammatical errors in low-resource scenarios. The experimental data are partitioned into rich- and low-resource scenarios at a 10:1 ratio. The study verifies the necessity of grammatical error correction for the pseudo-corpus in low-resource scenarios. Models and methods are chosen from the backbone network and related literature for comparative experiments. The experimental findings demonstrate that the proposed data augmentation approach is suitable for both rich- and low-resource scenarios and is effective in enhancing the training corpus to improve the performance of translation tasks.
12 citations
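One plausible reading of the low-frequency word substitution step can be sketched as follows; the toy corpus, frequency threshold, and synonym table are invented for illustration and do not reflect the paper's setup.

```python
# Hedged sketch: rare words in a sentence are swapped for higher-frequency
# alternatives drawn from a small substitution table, yielding extra
# synthetic sentence variants for augmentation.
from collections import Counter

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a feline dozed on the mat",
]
freq = Counter(w for line in corpus for w in line.split())

# Hypothetical mapping from rare words to frequent near-synonyms.
synonyms = {"feline": "cat", "dozed": "sat", "rug": "mat"}

def augment(sentence, min_count=2):
    out = []
    for w in sentence.split():
        if freq[w] < min_count and w in synonyms:
            out.append(synonyms[w])   # substitute the low-frequency word
        else:
            out.append(w)
    return " ".join(out)

print(augment("a feline dozed on the rug"))
# -> "a cat sat on the mat"
```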
TL;DR: In this article, the authors proposed a blockchain Sawtooth-enabled modular architecture for protected, secure, and trusted execution, service delivery, and acknowledgment, with immutable ledger storage and security, and peer-to-peer network on-chain and off-chain intercommunication for vehicular activities.
Abstract: The vast enhancement in the development of the Internet of Vehicles (IoV) is due to the impact of distributed emerging technology and the topology of the industrial IoV. It has created a new paradigm, along with the security-related resource constraints of Industry 5.0. This new dimension of the IoV raises various critical challenges for existing information preservation, especially in node transactions and communication, transmission, trust and privacy, and security protection, which are analyzed here. These aspects pose serious problems for the industry in providing vehicular-related data integrity, availability, information exchange reliability, provenance, and trustworthiness for overall activities and service delivery against an increasing number of transactions. In addition, there has been considerable research interest in the intersection of blockchain and the Internet of Vehicles. In this regard, the inadequate performance of the Internet of Vehicles and connected nodes and the high resource requirements of the consortium blockchain ledger have not yet been tackled with a complete solution. The introduction of the NuCypher re-encryption infrastructure, hash trees and allocation, and blockchain proof-of-work also requires more computational power. This paper contributes in two different folds. First, it proposes a blockchain Sawtooth-enabled modular architecture for protected, secure, and trusted execution, service delivery, and acknowledgment, with immutable ledger storage and security and peer-to-peer (P2P) network on-chain and off-chain intercommunication for vehicular activities. Secondly, we design and create a smart contract-enabled data structure to provide streamlined industrial node transactions and broadcast content. Substantially, we develop and deploy a Hyperledger Sawtooth-aware customized consensus for multiple proof-of-work investigations. For validation purposes, we simulate the exchange of information and related details between connected devices in the IoV. The simulation results show that the proposed BIoV architecture reduces the cost of computational power down to 37.21% and increases robust node generation and exchange up to 56.33%. Therefore, only 41.93% and 47.31% of the Internet of Vehicles-related resources and network constraints are kept and used, respectively.
11 citations
TL;DR: In this article, a learning-based resource segmentation (RS) technique is proposed to handle the resource allocation problem in a 5G wireless network, where a modified Random Forest Algorithm (RFA) uses the signal-to-interference-plus-noise ratio (SINR) together with position information to obtain the position coordinates of end-users.
Abstract: A 5G wireless network requires an efficient approach to effectively manage and segment resources. A Centralized Radio Access Network (CRAN) is used to handle complex distributed networks. Specific to network infrastructure, multicast communication is considered in the performance of data storage and information-based network connectivity. This paper proposes a modified Resource Allocation (RA) scheme that effectively handles the RA problem using a learning-based Resource Segmentation (RS) technique. It uses a modified Random Forest Algorithm (RFA) with the signal-to-interference-plus-noise ratio (SINR) and position information to obtain the position coordinates of end-users. Further, it predicts the Modulation and Coding Scheme (MCS) for establishing a connection between the end-user device and the Remote Radio Head (RRH). The proposed algorithm depends on the accuracy of the positional coordinates for the correctness of input parameters such as the SINR, which is based on the position and orientation of the antenna. The simulation analysis demonstrates the efficiency of the proposed technique in terms of throughput and energy efficiency.
11 citations
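A hedged sketch of the learning-based segmentation step: a random forest mapping SINR and position features to an MCS index. The synthetic data and class thresholds below are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: random forest mapping SINR and user position to a
# Modulation-and-Coding-Scheme (MCS) index on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
sinr = rng.uniform(-5, 30, n)                 # dB
xy = rng.uniform(0, 500, (n, 2))              # user position in metres
# Toy ground truth: higher SINR -> higher MCS index (0..3).
mcs = np.digitize(sinr, [5, 15, 25])

X = np.column_stack([sinr, xy])
X_tr, X_te, y_tr, y_te = train_test_split(X, mcs, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```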
TL;DR: In this article, a real-time interactive system is proposed for providing medical services to the needy who do not have sufficient medical infrastructure; it consists of many modules, such as the user interface, analytics, cloud, etc.
Abstract: The Internet of Medical Things (IoMT) is an extended version of the Internet of Things (IoT). It mainly concentrates on the integration of medical things for serving people who cannot access medical services easily, especially people in rural areas and aged people living alone. The main objective of this work is to design a real-time interactive system for providing medical services to the needy who do not have sufficient medical infrastructure. With the help of this system, people will get medical services at their end with minimal medical infrastructure and lower treatment costs. The designed system could also be upgraded to address the family of SARS viruses, and for experimentation, we have taken COVID-19 as a test case. The proposed system comprises many modules, such as the user interface, analytics, cloud, etc. The proposed user interface is designed for interactive data collection. At the initial stage, it collects preliminary medical information, such as the pulse oxygen rate and RT-PCR results. With the help of a pulse oximeter, users can get their pulse oxygen level. With the help of a swab test kit, they can determine COVID-19 positivity. That information is uploaded as preliminary information to the proposed system via the designed UI. If the system identifies COVID positivity, it requests that the person upload X-ray/CT images for ranking the severity of the disease. The system is designed for multi-modal data; hence, it can deal with X-ray and CT images as well as textual data (RT-PCR results). Once X-ray/CT images are collected via the designed UI, they are forwarded to the AI module for analytics. The proposed AI system is designed for multi-disease classification. It classifies patients as affected with COVID-19, pneumonia, or another viral infection. It also measures the intensity level of lung infection to support suitable treatment. Numerous deep convolutional neural network (DCNN) architectures are available for medical image classification. We used ResNet-50, ResNet-100, ResNet-101, VGG 16, and VGG 19 for better classification. From the experimentation, it was observed that ResNet-101 and VGG 19 perform best, with an accuracy of 97% for CT images, and ResNet-101 performs best, with an accuracy of 98%, for X-ray images. To obtain enhanced accuracy, we used a majority voting classifier, which combines all the classifiers' results and outputs the majority-voted one, reducing classifier bias. Finally, the proposed system presents an automatic test summary report textually, accessible via a user-friendly graphical user interface (GUI). This results in reduced report generation time and individual bias.
10 citations
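The majority-voting step described above can be shown in a few lines; the per-model predictions here are stand-ins for the ResNet/VGG outputs.

```python
# Sketch of majority voting: each per-model prediction votes, and the most
# common class wins, reducing single-model bias.
import numpy as np

# Hypothetical class predictions (0 = normal, 1 = COVID-19, 2 = pneumonia)
# from three classifiers over five images.
preds = np.array([
    [1, 0, 2, 1, 0],   # e.g. ResNet-101
    [1, 0, 1, 1, 0],   # e.g. VGG 19
    [1, 2, 2, 1, 1],   # e.g. ResNet-50
])
majority = np.array([np.bincount(col).argmax() for col in preds.T])
print(majority)        # -> [1 0 2 1 0]
```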
TL;DR: In this paper, the authors present the key enabling Explainable Artificial Intelligence (XAI) technologies for smart cities in detail and discuss the use cases, challenges, applications, possible alternative solutions, and current and future research enhancements.
Abstract: The emergence of Explainable Artificial Intelligence (XAI) has enhanced the lives of humans and envisioned the concept of smart cities using informed actions, enhanced user interpretations and explanations, and firm decision-making processes. XAI systems can unbox the potential of black-box AI models and describe them explicitly. The study comprehensively surveys current and future developments in XAI technologies for smart cities. It also highlights the societal, industrial, and technological trends that initiate the drive towards XAI for smart cities. It presents the key enabling XAI technologies for smart cities in detail. The paper also discusses the concept of XAI for smart cities, various XAI technology use cases, challenges, applications, possible alternative solutions, and current and future research enhancements. Research projects and activities, including standardization efforts toward developing XAI for smart cities, are outlined in detail. The lessons learned from state-of-the-art research are summarized, and various technical challenges are discussed to shed new light on future research possibilities. The presented study on XAI for smart cities is a first-of-its-kind, rigorous, and detailed study to assist future researchers in implementing XAI-driven systems, architectures, and applications for smart cities.
9 citations
TL;DR: In this paper, two novel methods are presented for improving the spectrum efficiency (SE) of the downlink NOMA power domain (PD) integrated with a cooperative cognitive radio network (CCRN) in a 5G network using single-input single-output (SISO), MIMO, and massive MIMO in the same network and in a single cell.
Abstract: Non-orthogonal multiple access (NOMA) is one of the most effective techniques for meeting the spectrum efficiency (SE) requirements of 5G and beyond networks. This paper presents two novel methods for improving the SE of the downlink (DL) NOMA power domain (PD) integrated with a cooperative cognitive radio network (CCRN) in a 5G network using single-input single-output (SISO), multiple-input multiple-output (MIMO), and massive MIMO (M-MIMO) in the same network and in a single cell. In the first method, NOMA users compete for free channels in a competing channel (C-CH) on the CCRN. The second method provides NOMA users with a dedicated channel (D-CH) with high priority. The proposed methods are evaluated in MATLAB under three scenarios with different distances, power allocation coefficients, and transmit powers. Four users are assumed to operate on 80 MHz bandwidths (BWs) and use quadrature phase shift keying (QPSK) modulation in all three scenarios. Successive interference cancellation (SIC) and unstable channel conditions are also considered when evaluating the performance of the proposed system under the assumption of frequency-selective Rayleigh fading. The best four-user SE performance, obtained by user U4, was 3.9 bps/Hz/cell for SISO DL NOMA, 5.1 bps/Hz/cell for SISO DL NOMA with CCRN with C-CH, and 7.2 bps/Hz/cell for SISO DL NOMA with CCRN with D-CH at 40 dBm transmit power. 64 × 64 MIMO DL NOMA improved the SE performance of the best user U4 by 51%, 64 × 64 MIMO DL NOMA with C-CH CCRN enhanced SE performance by 64%, and 64 × 64 MIMO DL NOMA with D-CH CCRN boosted SE performance by 65% compared to SISO DL NOMA at 40 dBm transmit power. Likewise, 128 × 128 M-MIMO DL NOMA improved SE performance for the best user U4 by 79%, 128 × 128 M-MIMO DL NOMA with C-CH CCRN boosted SE performance by 85%, and 128 × 128 M-MIMO DL NOMA with D-CH CCRN enhanced SE performance by 86% compared to SISO DL NOMA SE performance at 40 dBm transmit power. We found that the second proposed method, using the D-CH with CCR-NOMA, produced the best SE performance for users. On the other hand, spectral efficiency is significantly increased when applying MIMO and M-MIMO techniques.
9 citations
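The power-domain NOMA rate computation underlying these SE figures can be sketched numerically; the channel gains, power split, and noise level below are illustrative assumptions, not the paper's scenario parameters.

```python
# Hedged numeric sketch of downlink power-domain NOMA with SIC for two
# users: the far user gets more power and treats the near user's signal
# as noise; the near user cancels the far user's signal first.
import numpy as np

p_tx = 10.0                    # total transmit power (linear units)
n0 = 1e-3                      # noise power
a_far, a_near = 0.8, 0.2       # power allocation coefficients (sum to 1)
g_far, g_near = 0.01, 0.5      # channel power gains |h|^2

# Far user: decodes its own signal; near-user power acts as interference.
sinr_far = (a_far * p_tx * g_far) / (a_near * p_tx * g_far + n0)
# Near user: after SIC removes the far user's signal, only noise remains.
snr_near = (a_near * p_tx * g_near) / n0

se_far = np.log2(1 + sinr_far)      # bps/Hz
se_near = np.log2(1 + snr_near)     # bps/Hz
print(f"SE far: {se_far:.2f} bps/Hz, SE near: {se_near:.2f} bps/Hz")
```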
TL;DR: Wang et al. as discussed by the authors design a human–computer interaction system framework, which includes speech recognition, text-to-speech, dialogue systems, and virtual human generation, and classify the models of talking-head video generation by the virtual human deep generation framework.
Abstract: Virtual humans are widely employed in various industries, including personal assistance, intelligent customer service, and online education, thanks to the rapid development of artificial intelligence. An anthropomorphic digital human can quickly connect with people and enhance the user experience in human–computer interaction. Hence, we design a human–computer interaction system framework, which includes speech recognition, text-to-speech, dialogue systems, and virtual human generation. Next, we classify the models of talking-head video generation by the virtual human deep generation framework. Meanwhile, we systematically review the past five years' worth of technological advancements and trends in talking-head video generation, highlight the critical works, and summarize the datasets.
TL;DR: In this paper, the authors discuss the most common machine learning methods for forecasting building energy demand and investigate how the various SG, IoT, and ML components integrate and operate using a simple architecture with layers organized into entities that communicate with one another via connections.
Abstract: With the assistance of machine learning, difficult tasks can be completed entirely on their own. In a smart grid (SG), computers and mobile devices may make it easier to control the interior temperature, monitor security, and perform routine maintenance. The Internet of Things (IoT) is used to connect the various components of smart buildings. As the IoT concept spreads, SGs are being integrated into larger networks. The IoT is an important part of SGs because it provides services that improve everyone's lives. It has been established that the current life support systems are safe and effective at sustaining life. The primary goal of this research is to determine the motivation for IoT device installation in smart buildings and the grid. From this vantage point, the infrastructure that supports IoT devices and the components that comprise them are critical. The remote configuration of smart grid monitoring systems can improve the security and comfort of building occupants. Sensors are required to operate and monitor everything from consumer electronics to SGs. Network-connected devices should consume less energy and be remotely monitorable. The authors' goal is to aid in the development of solutions based on AI, IoT, and SGs. Furthermore, the authors investigate networking, machine intelligence, and SGs. Finally, we examine research on SGs and the IoT. Several IoT platform components are subject to debate. The first section of this paper discusses the most common machine learning methods for forecasting building energy demand. The authors then discuss the IoT and how it works, in addition to the SG and smart meters, which are required for receiving real-time energy data. Then, we investigate how the various SG, IoT, and ML components integrate and operate using a simple architecture with layers organized into entities that communicate with one another via connections.
TL;DR: In this paper, an acoustic feature set based on Mel frequency cepstral coefficients (MFCC), linear prediction cepstral coefficients (LPCC), wavelet packet transform (WPT), zero crossing rate (ZCR), spectrum centroid, spectral roll-off, spectral kurtosis, root mean square (RMS), pitch, jitter, and shimmer is presented to improve feature distinctiveness.
Abstract: Speech emotion recognition (SER) plays a vital role in human–machine interaction. A large number of SER schemes have been proposed over the last decade. However, the performance of SER systems remains challenging due to the high complexity of the systems, poor feature distinctiveness, and noise. This paper presents an acoustic feature set based on Mel frequency cepstral coefficients (MFCC), linear prediction cepstral coefficients (LPCC), wavelet packet transform (WPT), zero crossing rate (ZCR), spectrum centroid, spectral roll-off, spectral kurtosis, root mean square (RMS), pitch, jitter, and shimmer to improve feature distinctiveness. Further, a lightweight, compact one-dimensional deep convolutional neural network (1-D DCNN) is used to minimize the computational complexity and to represent the long-term dependencies of the speech emotion signal. The overall effectiveness of the proposed SER system is evaluated on the Berlin Database of Emotional Speech (EMODB) and the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) datasets. The proposed system gives an overall accuracy of 93.31% and 94.18% for the EMODB and RAVDESS datasets, respectively. The proposed MFCC and 1-D DCNN provide greater accuracy and outpace traditional SER techniques.
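Part of this acoustic feature set can be extracted with librosa, as sketched below on a synthetic tone; MFCC, ZCR, spectral centroid, roll-off, and RMS are shown, while LPCC, WPT, pitch, jitter, and shimmer would require additional tooling, and the frame settings are library defaults rather than the paper's.

```python
# Hedged sketch: frame-level acoustic features pooled into one
# fixed-length vector per utterance.
import numpy as np
import librosa

sr = 16000
y = 0.5 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # 1 s, 220 Hz tone

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)       # (13, frames)
zcr = librosa.feature.zero_crossing_rate(y)              # (1, frames)
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)
rms = librosa.feature.rms(y=y)

# Mean-pool each feature over time and concatenate.
features = np.concatenate([f.mean(axis=1)
                           for f in (mfcc, zcr, centroid, rolloff, rms)])
print(features.shape)   # -> (17,)
```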
TL;DR: In this article, a deep learning model with a customized architecture was designed for detecting acute leukemia using images of lymphocytes and monocytes, achieving a 99% accuracy rate in diagnosing acute leukemia types, including ALL and AML.
Abstract: The production of blood cells is affected by leukemia, a type of bone marrow or blood cancer. Deoxyribonucleic acid (DNA) is related to immature cells, particularly white cells, and is damaged in various ways in this disease. When a radiologist is involved in diagnosing acute leukemia cells, the diagnosis is time-consuming and its accuracy needs improvement. For this purpose, much research has been conducted on the automatic diagnosis of acute leukemia. However, these studies suffer from low detection speed and accuracy. Machine learning and artificial intelligence techniques are now playing an essential role in the medical sciences, particularly in detecting and classifying leukemic cells. These methods assist doctors in detecting diseases earlier, reducing their workload and the possibility of errors. This research aims to design a deep learning model with a customized architecture for detecting acute leukemia using images of lymphocytes and monocytes. This study presents a novel dataset containing images of Acute Lymphoblastic Leukemia (ALL) and Acute Myeloid Leukemia (AML). The new dataset has been created with the assistance of various experts to help the scientific community in its efforts to incorporate machine learning techniques into medical research. The scale of the dataset is increased with a Generative Adversarial Network (GAN). The proposed CNN model, based on the Tversky loss function, includes six convolution layers, four dense layers, and a Softmax activation function for the classification of acute leukemia images. The proposed model achieved a 99% accuracy rate in diagnosing acute leukemia types, including ALL and AML. Compared to previous research, the proposed network provides promising performance in terms of speed and accuracy; based on the results, the proposed model can be used to assist doctors and specialists in practical applications.
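The Tversky loss generalizes the Dice loss by weighting false positives and false negatives separately; a minimal differentiable sketch is below, with alpha and beta as illustrative values since the paper's exact weights are not given here.

```python
# Hedged sketch of a differentiable Tversky loss:
# TI = TP / (TP + alpha*FP + beta*FN), loss = 1 - TI.
import torch

def tversky_loss(probs, targets, alpha=0.5, beta=0.5, eps=1e-7):
    """probs, targets: tensors of equal shape with values in [0, 1]."""
    tp = (probs * targets).sum()
    fp = (probs * (1 - targets)).sum()
    fn = ((1 - probs) * targets).sum()
    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky_index

# One-hot targets and softmax outputs for a 4-image, 2-class toy batch.
logits = torch.randn(4, 2)
probs = logits.softmax(dim=1)
targets = torch.eye(2)[torch.tensor([0, 1, 1, 0])]
print(tversky_loss(probs, targets).item())
```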
TL;DR: In this paper, a memristive chaotic system with transcendental nonlinearities is extended to the fractional-order domain and employed in the substitution stage of an image encryption algorithm, together with a generalized Arnold map for the permutation.
Abstract: The work in this paper extends a memristive chaotic system with transcendental nonlinearities to the fractional-order domain. The extended system’s chaotic properties were validated through bifurcation analysis and spectral entropy. The presented system was employed in the substitution stage of an image encryption algorithm, including a generalized Arnold map for the permutation. The encryption scheme demonstrated its efficiency through statistical tests, key sensitivity analysis and resistance to brute force and differential attacks. The fractional-order memristive system includes a reconfigurable coordinate rotation digital computer (CORDIC) and Grünwald–Letnikov (GL) architectures, which are essential for trigonometric and hyperbolic functions and fractional-order operator implementations, respectively. The proposed system was implemented on the Artix-7 FPGA board, achieving a throughput of 0.396 Gbit/s.
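The Grünwald–Letnikov operator mentioned above approximates a fractional derivative by convolving the signal history with binomial weights generated by a simple recurrence. A minimal sketch follows, with illustrative order and step size; hardware GL architectures typically also truncate the memory window.

```python
# Hedged sketch of the Gruenwald-Letnikov (GL) approximation:
# D^alpha x(t_n) ~= h^(-alpha) * sum_j w_j * x(t_{n-j}),
# with w_0 = 1 and w_j = w_{j-1} * (1 - (alpha + 1) / j).
import numpy as np

def gl_weights(alpha, n):
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def gl_derivative(x, alpha, h):
    """Fractional derivative of a sampled signal x with step h."""
    w = gl_weights(alpha, len(x) - 1)
    return np.array([np.dot(w[:k + 1], x[k::-1])
                     for k in range(len(x))]) / h**alpha

t = np.linspace(0, 1, 101)
d_half = gl_derivative(t, 0.5, t[1] - t[0])   # half-derivative of f(t)=t
# Analytically, D^0.5 of t is 2*sqrt(t/pi); compare at the endpoint:
print(d_half[-1], 2 * np.sqrt(1 / np.pi))
```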
TL;DR: In this article, a base station array antenna in a 1 × 6 configuration is proposed for sub-6 GHz 5G applications. The proposed antenna provides a stable and high gain of 11–18 dB using reflectors with sidewalls, and its electrical downtilt is investigated for a 1 × 6 array arrangement with dimensions of 642 mm × 112 mm × 90 mm.
Abstract: In this article, a base station array antenna in a 1 × 6 configuration is proposed for sub-6 GHz 5G applications. Analyses have been performed on two orthogonally arranged dipole strips, a balun with various feeding schemes, and a reflector with different side walls. At the balanced feed position, aluminum is used to connect the feeding balun and the dipole through a hole. A single crossed antenna element of size 66 × 66 × 78 mm³ is fabricated using an FR-4 substrate with a dielectric constant of 4.4, a thickness of 1.6 mm, and an operating frequency band from 3.2 to 5.22 GHz. The radiating element provides a stable and high gain of 11–18 dB using reflectors with sidewalls. The proposed element is simulated, and its electrical downtilt is investigated for a 1 × 6 array arrangement with dimensions of 642 mm × 112 mm × 90 mm. Various radiation performance parameters are measured, such as gain, FBR (>26 dB), HPBW, and XPD (>11.5 dB) at 60° in the H-plane. A reflection coefficient of less than −15 dB and port-to-port isolation of greater than 27 dB are achieved. Simulation and measurement of radiation patterns are performed for the operating frequencies of 3.2, 4.2, and 5.2 GHz.
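Electrical downtilt in such a 1 × 6 array comes from a progressive phase shift across the elements. A hedged array-factor sketch follows; the spacing, carrier, and tilt angle are assumptions, not the measured design values.

```python
# Hedged sketch: steering the main beam of a 6-element vertical array
# below the horizon with a progressive phase shift (electrical downtilt).
import numpy as np

c = 3e8
f = 3.5e9                         # an assumed sub-6 GHz carrier
lam = c / f
d = 0.7 * lam                     # assumed element spacing
k = 2 * np.pi / lam
n = np.arange(6)

tilt_deg = 6                      # desired electrical downtilt
theta0 = np.radians(90 + tilt_deg)            # measured from the array axis
beta = -k * d * np.cos(theta0)                # progressive phase shift

theta = np.radians(np.linspace(60, 120, 601))  # scan around broadside
af = np.abs(np.exp(1j * np.outer(n, k * d * np.cos(theta) + beta)).sum(axis=0))
peak = np.degrees(theta[af.argmax()]) - 90
print(f"beam peak at {peak:.1f} deg below broadside")   # ~6 deg downtilt
```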
TL;DR: In this article, a computer vision model was developed for estimating the coordinates of objects of interest and subsequently recalculating those coordinates relative to the manipulator to form a control action.
Abstract: Modern deep learning systems make it possible to develop increasingly intelligent solutions in various fields of science and technology. The electronics of single-board computers facilitate the control of various robotic solutions, and implementing such tasks does not require a large amount of resources. However, deep learning models still require a high level of computing power. Thus, effective control of an intelligent robot manipulator is possible when a computationally complex deep learning model on GPU graphics devices and a mechanics control unit on a single-board computer work together. In this regard, the study is devoted to developing a computer vision model that estimates the coordinates of objects of interest and subsequently recalculates those coordinates relative to the manipulator to form a control action. In addition, a reinforcement learning model was developed in a simulation environment to determine the optimal path for picking apples from 2D images. The detection efficiency on the test images was 92%, and in the laboratory it was possible to achieve 100% detection of apples. In addition, an algorithm was trained that provides adequate guidance to apples located at a distance of 1 m along the Z axis. Thus, the original neural network used to recognize apples was trained on a large image dataset, algorithms for estimating the coordinates of apples were developed and investigated, and reinforcement learning was suggested to optimize the picking policy.
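The coordinate-recalculation step can be illustrated with a pinhole camera model: a detected pixel is back-projected at a known depth and mapped into the manipulator's base frame. The intrinsics and extrinsic transform below are illustrative assumptions.

```python
# Hedged sketch: pixel -> camera frame -> robot base frame.
import numpy as np

fx, fy, cx, cy = 900.0, 900.0, 640.0, 360.0   # assumed camera intrinsics

def pixel_to_camera(u, v, z):
    """Back-project a pixel at depth z (metres) into camera coordinates."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

# Assumed fixed camera-to-robot-base transform (rotation R, translation t).
R = np.eye(3)
t = np.array([0.2, 0.0, 0.5])

def camera_to_base(p_cam):
    return R @ p_cam + t

apple_px = (700, 400)          # e.g. a detection at 1 m along the Z axis
p = camera_to_base(pixel_to_camera(*apple_px, z=1.0))
print(p)   # target point handed to the manipulator controller
```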
TL;DR: In this paper, a deep learning approach was proposed for predicting breast cancer risk, based on transfer learning using the InceptionResNetV2 model.
Abstract: Cancer is a complicated global health concern with a significant fatality rate. Breast cancer is among the leading causes of mortality each year. Prognosis has increasingly been based on gene expression, offering insight into robust and appropriate healthcare decisions, owing to the fast growth of high-throughput sequencing techniques and the various deep learning approaches that have arisen in the past few years. Diagnostic-imaging disease indicators such as breast density and tissue texture are widely used by physicians and automated technology. The effective and specific identification of cancer risk can be used to inform tailored screening and preventive decisions. For several classification and prediction applications, such as breast imaging, deep learning has increasingly emerged as an effective method. On this foundation, we present a deep learning approach for predicting breast cancer risk. The proposed methodology is based on transfer learning using the InceptionResNetV2 deep learning model. Our experimental work on a breast cancer dataset demonstrates high model performance, with 91% accuracy. The proposed model incorporates risk markers that are used to improve breast cancer risk assessment scores, and it presents promising results compared to existing approaches. This article describes breast cancer risk indicators, defines the proper usage, features, and limits of each risk forecasting model, and examines the increasing role of deep learning (DL) in risk detection. The proposed model could potentially be used to automate various types of medical imaging techniques.
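The transfer-learning setup described can be sketched in Keras as follows; the input size, head width, and optimizer are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch: frozen ImageNet-pretrained InceptionResNetV2 backbone
# with a small binary-risk head on top.
import tensorflow as tf

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False                      # freeze the pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # risk probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```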
TL;DR: In this paper, the authors proposed a fog computing strategy for 5G-enabled automotive networks that is based on the Chebyshev polynomial and allows for the revocation of pseudonyms.
Abstract: The privacy and security of the information exchanged between automobiles in 5G-enabled vehicular networks is at risk. Several academics have offered solutions to these problems in the form of authentication techniques that use an elliptic curve or bilinear pairing to sign messages and verify the signature. The problem is that these operations are lengthy and difficult to execute efficiently. Further, the requirements for revoking a pseudonym in a vehicular network are not met by these approaches. Thus, this research offers a fog computing strategy for 5G-enabled automotive networks that is based on the Chebyshev polynomial and allows for the revocation of pseudonyms. Our solution eliminates the threat of an insider attack by making use of fog computing. In particular, the fog server does not renew the signature key when the validity period of a pseudonym-ID is about to end. In addition to meeting privacy and security requirements, our proposal is also resistant to a wide range of potential security breaches. Finally, the Chebyshev polynomial is used in our work to sign the message and verify the signature, resulting in greater performance cost efficiency than would otherwise be possible if an elliptic curve or bilinear pairing operation had been employed.
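The Chebyshev-polynomial machinery behind such schemes rests on the semigroup property T_r(T_s(x)) = T_rs(x) = T_s(T_r(x)) (mod p), which enables a Diffie-Hellman-style agreement without elliptic curves or pairings. A toy sketch with illustrative parameters follows; it is not the paper's signature protocol.

```python
# Hedged sketch: shared-secret derivation from the Chebyshev semigroup
# property. The tiny prime and exponents are illustrative only.
def chebyshev(n, x, p):
    """T_n(x) mod p via the recurrence T_k = 2x*T_{k-1} - T_{k-2}."""
    t_prev, t_cur = 1, x % p        # T_0 = 1, T_1 = x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_cur = t_cur, (2 * x * t_cur - t_prev) % p
    return t_cur

p, x = 1_000_003, 42                # public parameters
r, s = 12345, 67890                 # private keys of the two parties
pub_r = chebyshev(r, x, p)          # party A publishes T_r(x)
pub_s = chebyshev(s, x, p)          # party B publishes T_s(x)
# Both sides derive the same secret T_{rs}(x) mod p:
assert chebyshev(r, pub_s, p) == chebyshev(s, pub_r, p)
```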
TL;DR: In this article, the authors proposed a technique in which the distribution obtained from the cover image determines the pixels that attain a peak or zero distribution; adjacent histogram bins of the peak point are then shifted, and data embedding is performed in the peak pixels using the least significant bit (LSB) technique.
Abstract: Reversible data hiding (RDH) techniques recover the original cover image after data extraction. Thus, they have gained popularity in e-healthcare, law forensics, and military applications. However, histogram shifting using a reversible data embedding technique suffers from low embedding capacity and high variability. This work proposes a technique in which the distribution obtained from the cover image determines the pixels that attain a peak or zero distribution. Afterward, adjacent histogram bins of the peak point are shifted, and data embedding is performed in the peak pixels using the least significant bit (LSB) technique. Furthermore, the robustness and embedding capacity are improved using the proposed dynamic block-wise reversible embedding strategy. In addition, the secret data are encrypted before embedding to further strengthen security. The experimental evaluation suggests that the proposed work attains superior stego images, with a peak signal-to-noise ratio (PSNR) of more than 58 dB for 0.9 bits per pixel (BPP). Additionally, the results of the two-sample t-test and the Kolmogorov–Smirnov test reveal that the proposed work is resistant to attacks.
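The classical histogram-shifting embed that this approach builds on can be sketched briefly; the proposed block-wise, encrypted variant adds steps not shown here, and the toy cover image is synthetic.

```python
# Hedged sketch of classical histogram-shifting embedding: pixels between
# the peak bin and a near-empty bin are shifted by one to free the bin
# next to the peak, and each peak-valued pixel then carries one bit.
import numpy as np

def hs_embed(img, bits):
    h = np.bincount(img.ravel(), minlength=256)
    peak = int(h.argmax())
    zero = int(h[peak + 1:].argmin() + peak + 1)   # least-populated bin above peak
    out = img.copy().astype(np.int16)
    out[(out > peak) & (out < zero)] += 1          # shift to free bin peak+1
    flat = out.ravel()
    idx = np.flatnonzero(flat == peak)[:len(bits)] # peak pixels carry the bits
    flat[idx] += np.asarray(bits, dtype=np.int16)  # bit 1 -> peak+1, bit 0 -> peak
    return flat.reshape(img.shape).astype(np.uint8), peak, zero

rng = np.random.default_rng(1)
cover = rng.normal(120, 10, (64, 64)).clip(0, 255).astype(np.uint8)
stego, peak, zero = hs_embed(cover, [1, 0, 1, 1, 0, 0, 1, 0])
print(peak, zero, np.count_nonzero(stego != cover))
```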
TL;DR: In this article, the authors identify how disruptive technologies have evolved over time and their current acceptance, and extract the most prominent disruptive technologies, besides AI, that are in use today.
Abstract: The greatest technological changes in our lives are predicted to be brought about by Artificial Intelligence (AI). Together with the Internet of Things (IoT), blockchain, and several others, AI is considered to be the most disruptive technology, and it has impacted numerous sectors, such as healthcare (medicine), business, agriculture, education, and urban development. The present research aims to achieve the following: (1) identify how disruptive technologies have evolved over time and their current acceptance; (2) extract the most prominent disruptive technologies, besides AI, that are in use today; and (3) elaborate on the domains that have been impacted by AI and how this occurred. Based on a sentiment analysis of the titles and abstracts, the results reveal that the majority of recent publications have a positive connotation with regard to the disruptive impact of edge technologies, and that the most prominent examples (the top five) are AI, the IoT, blockchain, 5G, and 3D printing. The disruptive effects of AI technology are still changing how people interact in the corporate, consumer, and professional sectors, while 5G and other mobile technologies will become highly disruptive and will genuinely revolutionize the landscape in all sectors in the upcoming years.
TL;DR: In this article, a 4D model was designed to detect drowsiness based on eye state, achieving 97.53% accuracy in predicting the eye state on the test dataset.
Abstract: There are a variety of potential uses for the classification of eye conditions, including tiredness detection, psychological condition evaluation, etc. Because of its significance, many studies using typical neural network algorithms have already been published in the literature, with good results. Convolutional neural networks (CNNs) are employed in real-time applications to achieve two goals: high accuracy and speed. However, identifying drowsiness at an early stage significantly improves the chances of being saved from accidents. Drowsiness detection can be automated by using the potential of artificial intelligence (AI), which allows us to assess more cases in less time and at a lower cost. With the help of modern deep learning (DL) and digital image processing (DIP) techniques, in this paper, we suggest a CNN model for eye state categorization, evaluated across three CNN models (VGG16, VGG19, and the 4D model). The novel CNN model, named the 4D model, was designed to detect drowsiness based on eye state. The MRL Eye dataset was used to train the model. When trained with training samples from the same dataset, the 4D model performed very well (around 97.53% accuracy for predicting the eye state on the test dataset). The 4D model outperformed the two other pretrained models (VGG16 and VGG19). This paper explains how to create a complete drowsiness detection system that predicts the state of a driver's eyes to determine the driver's drowsiness and alert the driver before any severe threat to road safety.
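A small eye-state CNN in the spirit of the custom 4D model can be sketched in Keras; the layer counts and input size below are illustrative assumptions, not the published architecture.

```python
# Hedged sketch: binary eye-state (open/closed) CNN for drowsiness cues.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu",
                           input_shape=(64, 64, 1)),  # grayscale eye crops
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(eye closed)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# A drowsiness monitor would raise an alert when the closed-eye
# probability stays high over consecutive video frames.
```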
TL;DR: In this article, the authors present a solution based on blockchain technology and smart contracts for agile project management in light of the continuing transition in the software development industry, where major stakeholders communicate through smart contracts, which act as a bridge between them.
Abstract: We present a solution based on blockchain technology and smart contracts for agile project management in light of the continuing transition in the software development industry. Because these technologies are self-executing, customizable, and impervious to tampering, they are considered crucial for the transition to a more efficient, transparent, and transactive payment gateway between major stakeholders. These major stakeholders will be able to communicate through smart contracts, which will act as a bridge between them. As part of their responsibility, the contracts will ensure that all of the terms are met and acknowledged by all members of the team. As a result of our research, we propose a model in which payouts could be automatically enabled and penalties or grants could be introduced based on performance. If any changes were made to the contract in the future, all parties involved would be automatically notified and should accept these changes as soon as possible to maintain the development cycle. Because of this, the product owner and client are able to concentrate their resources on more profitable and productive tasks, without the need to monitor this aspect of agile project management. Our proposed model brings together different partners with the objective of successfully developing IT projects by leveraging software engineering solutions such as smart contracts.
TL;DR: In this article, a machine-learning-based rating system was proposed to provide early warnings against financial fraud in electronic banking transactions, which can reduce the amount of successful fraud and improve call center queue administration.
Abstract: The number of fraud occurrences in electronic banking is rising each year. Experts in the field of cybercrime are continuously monitoring and verifying network infrastructure and transaction systems. Dedicated threat response teams (CSIRTs) are used by organizations to ensure security and stop cyber attacks. Financial institutions are well aware of this and have increased funding for CSIRTs and antifraud software. If the company has a rule-based antifraud system, the CSIRT can examine fraud cases and create rules to counter the threat. If not, they can attempt to analyze Internet traffic down to the packet level and look for anomalies before adding network rules to proxy or firewall servers to mitigate the threat. However, this does not always solve the issue, because transactions occasionally receive a "gray" rating. Nevertheless, the bank is unable to verify every gray transaction, because the number of call center employees is insufficient to make this possible. In this study, we designed a machine-learning-based rating system that provides early warnings against financial fraud. We present the system architecture together with the new ML-based scoring extension, which examines customer logins from the banking transaction system. The suggested method enhances the organization's rule-based fraud prevention system. Because scoring occurs immediately after the client identification and authorization process, the system can quickly identify gray operations. The suggested method reduces the amount of successful fraud and improves call center queue administration.
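The scoring-and-routing idea can be sketched as a classifier plus a two-threshold "gray band"; the features, labels, and thresholds below are illustrative assumptions, not the bank's actual model.

```python
# Hedged sketch: a classifier turns login/transaction features into a
# fraud probability, and only the gray band between two thresholds is
# routed to the call-center queue.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
# Toy features: [login hour, new-device flag, amount / 1000].
X = np.column_stack([rng.integers(0, 24, 500),
                     rng.integers(0, 2, 500),
                     rng.exponential(1.0, 500)])
y = (X[:, 1] * (X[:, 2] > 2)).astype(int)      # synthetic fraud label
clf = LogisticRegression().fit(X, y)

def route(features, low=0.2, high=0.8):
    p = clf.predict_proba([features])[0, 1]
    if p >= high:
        return "block"                # clear fraud: stop automatically
    if p <= low:
        return "approve"              # clear legitimate traffic
    return "call-center review"       # gray zone: manual verification

print(route([3, 1, 2.5]))
```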
TL;DR: In this article, the authors determine the impact of sensational and breaking news headlines on content credibility, showing that the perception of sensationalism mediates the relation between the presence of breaking news headlines and trust in the content, and recommend using sensational headlines with caution to maintain credibility.
Abstract: The development of social media has triggered important changes in our society and in the way consumers read and trust online information. The presence of consumers in the online environment exposes them to a greater extent to various instances of fake news, which are spread more or less intentionally. Sensational and breaking-news-style information is one of the ways in which consumers' attention is attracted, through the posting of exaggerated or distorted information. The objective of our research is to determine the impact of sensational and breaking news headlines on content credibility. In a mediation model, we show that the perception of sensationalism mediates the relation between the presence of breaking news headlines and trust in the content of the information. Based on our proposed model, the existence of breaking news headlines increases consumers' perception of sensationalism and reduces trust in news content. These results have important implications for patterns of news consumption. If a piece of information is presented in a sensational way, it might attract more consumer attention in the short term, but in the long run it will reduce the credibility of its content. Based on our research, we recommend using sensational headlines with caution to maintain credibility.
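The mediation logic can be illustrated numerically: regress the mediator on the headline cue (path a), then the outcome on both (path b), and examine the indirect effect a*b. The data below are synthetic and do not reproduce the study's survey or estimator.

```python
# Hedged numeric sketch of a simple mediation model:
# breaking-news cue -> perceived sensationalism -> trust.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
breaking = rng.integers(0, 2, n).astype(float)        # headline condition
sensational = 0.8 * breaking + rng.normal(0, 1, n)    # mediator
trust = -0.6 * sensational + rng.normal(0, 1, n)      # outcome

# Path a: breaking news -> perceived sensationalism.
a = sm.OLS(sensational, sm.add_constant(breaking)).fit().params[1]
# Path b: sensationalism -> trust, controlling for the headline cue.
Xb = sm.add_constant(np.column_stack([breaking, sensational]))
b = sm.OLS(trust, Xb).fit().params[2]
print(f"indirect effect a*b = {a * b:.3f}")           # expected ~ -0.48
```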
TL;DR: In this paper, a machine learning model that can use publicly available data to forecast the occurrence of chronic kidney disease (CKD) was developed. Conventional detection methods, by contrast, are not always accurate because of their high degree of dependency on several sets of biological attributes.
Abstract: Clinical support systems are affected by the issue of high variance in the prognosis of chronic disorders. This uncertainty is one of the principal causes of death for large populations around the world suffering from fatal diseases such as chronic kidney disease (CKD). For this reason, the diagnosis of this disease is of great concern for healthcare systems. In such a case, machine learning can be used as an effective tool to reduce the randomness in clinical decision making. Conventional methods for the detection of chronic kidney disease are not always accurate because of their high degree of dependency on several sets of biological attributes. Machine learning is the process of training a machine using a vast collection of historical data for the purpose of intelligent classification. This work aims at developing a machine-learning model that can use publicly available data to forecast the occurrence of chronic kidney disease. A set of data preprocessing steps was performed on the dataset in order to construct a generic model. These steps include the appropriate imputation of missing data points, the balancing of data using the SMOTE algorithm, and the scaling of the features. A statistical technique, namely the chi-squared test, is used to extract the smallest required set of adequate features that are highly correlated with the output. For model training, a stack of supervised-learning techniques is used to develop a robust machine-learning model. Of all the applied learning techniques, support vector machine (SVM) and random forest (RF) achieved the lowest false-negative rates, with test accuracies of 99.33% and 98.67%, respectively. However, SVM achieved better results than RF when validated with 10-fold cross-validation.
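The described preprocessing-and-training pipeline can be sketched with standard tooling; a synthetic stand-in replaces the public CKD dataset, and k, the kernel, and pre-CV resampling are simplifications (a stricter protocol would resample within folds).

```python
# Hedged sketch: scaling, SMOTE balancing, chi-squared feature selection,
# then an SVM evaluated with 10-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=400, n_features=24, weights=[0.75],
                           random_state=0)      # imbalanced stand-in data
X = MinMaxScaler().fit_transform(X)             # chi2 needs non-negatives
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
X_sel = SelectKBest(chi2, k=10).fit_transform(X_bal, y_bal)

scores = cross_val_score(SVC(kernel="rbf"), X_sel, y_bal, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.3f}")
```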
TL;DR: In this paper, a DNN method was proposed that combines synchronous sequences and heterogeneous features to more accurately generate candidates in e-learning platforms that face an exponential increase in the number of available online educational courses and learners.
Abstract: Commercial e-learning platforms have to overcome the challenge of resource overload and find the most suitable material for educators using a recommendation system (RS) when an exponential increase occurs in the amount of available online educational resources. Therefore, we propose a novel DNN method that combines synchronous sequences and heterogeneous features to more accurately generate candidates in e-learning platforms facing an exponential increase in the number of available online educational courses and learners. Mitigating the learners' cold-start problem was also taken into consideration during modeling. The main concepts of the proposed approach are grouping learners in the first phase and then combining sequence and heterogeneous data as embeddings in a deep-neural-network-based recommender. Empirical results confirmed the proposed solution's potential. In particular, the precision rates were 0.626 and 0.492 for the Top-1 and Top-5 courses, respectively. Learners' cold-start errors were 0.618 and 0.697 for 25 and 50 new learners.
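The core modeling idea, pooling an embedded course sequence and concatenating it with heterogeneous profile features before scoring candidates, can be sketched in Keras; the vocabulary sizes and layer widths are illustrative assumptions.

```python
# Hedged sketch: sequence embedding + heterogeneous features -> candidate
# course scores.
import tensorflow as tf

n_courses, seq_len, n_profile = 5000, 10, 8

seq_in = tf.keras.layers.Input(shape=(seq_len,), dtype="int32")
profile_in = tf.keras.layers.Input(shape=(n_profile,))

seq_emb = tf.keras.layers.Embedding(n_courses, 32, mask_zero=True)(seq_in)
seq_vec = tf.keras.layers.GlobalAveragePooling1D()(seq_emb)  # pooled history

x = tf.keras.layers.Concatenate()([seq_vec, profile_in])
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(64, activation="relu")(x)
out = tf.keras.layers.Dense(n_courses, activation="softmax")(x)  # candidates

model = tf.keras.Model([seq_in, profile_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["sparse_top_k_categorical_accuracy"])  # Top-5 metric
```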
TL;DR: In this paper, a shape-perpendicular magnetic anisotropy-double oxide layer magnetic tunnel junction (s-PMA DMTJ) was used to construct a potential logic-locking (LL) defensive mechanism.
Abstract: In recent years, the discovery of various vulnerabilities in the IC supply chain has raised security concerns in electronic systems. Recent research has proposed numerous attack and defense mechanisms involving various nanoelectronic devices. Spintronic devices are a viable choice among nanoelectronic devices because of their non-volatility, ease of fabrication with a silicon substrate, randomization in space and time, etc. This work uses a shape-perpendicular magnetic anisotropy-double oxide layer magnetic tunnel junction (s-PMA DMTJ) to construct a potential logic-locking (LL) defensive mechanism. s-PMA DMTJs can be used for more realistic novel solutions in secure hardware design due to their improved thermal stability and area efficiency compared to traditional MTJs. The LL system's critical design range and viability are investigated in this work and compared with other two-terminal MTJ designs using various circuit analysis techniques, such as Monte Carlo simulations, eye diagram analysis, transient measurement, and parametric simulations. A Hamming distance of 25% and output corruption coverage of 100% are achieved in the investigated test circuit.
TL;DR: In this article, a 1 KB memory array was created using CMOS technology and a supply voltage of 0.6 volts, employing a 1-bit 6T SRAM cell with a minimum leakage current of 18.65 pA and an average delay of 19 ns.
Abstract: Computer memory comprises temporarily or permanently stored data and instructions, which are utilized in electronic digital computers. The opposite of serial access memory is Random Access Memory (RAM), where the memory is accessed immediately for both reading and writing operations. There has been vast technological improvement, which has led to a tremendous increase in the complexity that can be designed on a single chip. Small feature sizes, low power requirements, low costs, and great performance have emerged as the essential attributes of any electronic component. Designers have been forced into the sub-micron realm for all these reasons, which places the leakage characteristics front and centre. Many electrical parts, especially digital ones, are made to store data, emphasising the need for memory. The largest factor in the power consumption of SRAM is the leakage current. In this article, a 1 KB memory array was created using CMOS technology and a supply voltage of 0.6 volts, employing a 1-bit 6T SRAM cell. We developed this SRAM in 1-bit, 32 × 1-bit, and 32 × 32 configurations. The array structure was implemented using a 6T SRAM cell with a minimum leakage current of 18.65 pA and an average delay of 19 ns, and it consumed 48.22 μW and 385 μW for read and write operations, respectively. The proposed 32 × 32 memory array SRAM performed better than the existing 8T SRAM and 7T SRAM in terms of power consumption for read and write operations. Using the Cadence Virtuoso tool (Version IC6.1.8-64b.500.14) and 22 nm technology, the functionality of the 1 KB SRAM array was verified.
TL;DR: In this paper, the authors proposed the KPE-YOLOv5 algorithm, which aims to improve small target detection by obtaining more accurately sized anchor boxes for small targets through K-means++ clustering.
Abstract: At present, existing methods have many limitations in small target detection, such as low accuracy and high rates of false and missed detections. This paper proposes the KPE-YOLOv5 algorithm, which aims to improve small target detection. The algorithm makes three improvements over the YOLOv5 algorithm. Firstly, it achieves more accurately sized anchor boxes for small targets through K-means++ clustering. Secondly, the scSE (spatial and channel squeeze-and-excitation) attention module is integrated into the new algorithm to encourage the backbone network to pay greater attention to the feature information of small targets. Finally, the capability of small target feature extraction is improved by adding a small target detection layer, which also increases the detection accuracy for small targets. We evaluate KPE-YOLOv5 on the VisDrone-2020 dataset and compare its performance with YOLOv5. The results show that KPE-YOLOv5 improves detection mAP by 5.3% and increases precision (P) by 7%. The KPE-YOLOv5 algorithm achieves better detection outcomes than YOLOv5 for small targets.
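The anchor-refitting step can be sketched with K-means++ on (width, height) pairs; the box data are synthetic, and plain Euclidean distance is a simplification of the IoU-based distance YOLOv5's anchor tools use.

```python
# Hedged sketch: cluster ground-truth box sizes with K-means++ and use
# the cluster centres as anchor sizes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic (w, h) of training boxes in pixels, skewed toward small targets.
wh = np.column_stack([rng.gamma(2.0, 12.0, 3000),
                      rng.gamma(2.0, 12.0, 3000)])

km = KMeans(n_clusters=9, init="k-means++", n_init=10, random_state=0).fit(wh)
anchors = km.cluster_centers_[np.argsort(km.cluster_centers_.prod(axis=1))]
print(np.round(anchors, 1))    # nine anchors, sorted by area
```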