Author

Mohsin Jamil

Bio: Mohsin Jamil is an academic researcher from Memorial University of Newfoundland. The author has contributed to research in topics: Computer science & Control theory. The author has an h-index of 20, has co-authored 141 publications, and has received 1243 citations. Previous affiliations of Mohsin Jamil include Islamic University & National University of Science and Technology.


Papers
Journal ArticleDOI
01 Aug 2018-Sensors
TL;DR: The CNN significantly improved performance and increased robustness over time compared with standard LDA with handcrafted features; this data-driven feature extraction approach may overcome the problem of feature calibration and selection in myoelectric control.
Abstract: Pattern recognition of electromyography (EMG) signals can potentially improve the performance of myoelectric control for upper limb prostheses with respect to current clinical approaches based on direct control. However, the choice of features for classification is challenging and impacts long-term performance. Here, we propose the use of raw EMG signals, recorded over multiple days, as direct inputs to deep networks with intrinsic feature extraction capabilities. Seven able-bodied subjects performed six active motions (plus rest), and EMG signals were recorded for 15 consecutive days with two sessions per day using the MYO armband (MYB, a wearable EMG sensor). The classification was performed by a convolutional neural network (CNN) with raw bipolar EMG samples as the inputs, and the performance was compared with linear discriminant analysis (LDA) and stacked sparse autoencoders with features (SSAE-f) and raw samples (SSAE-r) as inputs. CNN outperformed (lower classification error) both LDA and SSAE-r in the within-session, between-sessions-on-same-day, between-pairs-of-days, and leave-one-day-out (p < 0.001) analyses. However, no significant difference was found between CNN and SSAE-f. These results demonstrate that CNN significantly improved performance and increased robustness over time compared with standard LDA with handcrafted features. This data-driven feature extraction approach may overcome the problem of feature calibration and selection in myoelectric control.
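The raw-samples-as-input pipeline can be sketched as a windowing step that turns a multi-channel EMG stream into CNN-ready tensors. This is a minimal NumPy sketch with illustrative window and step sizes (the paper's exact parameters are not stated here); it assumes an 8-channel MYO stream at 200 Hz.

```python
import numpy as np

def segment_emg(emg, win_len=150, step=50):
    """Slice a (channels, samples) raw EMG recording into overlapping
    windows shaped (n_windows, 1, channels, win_len) for a 2-D CNN
    that consumes raw bipolar samples directly (no handcrafted features)."""
    n_ch, n_samp = emg.shape
    starts = range(0, n_samp - win_len + 1, step)
    wins = np.stack([emg[:, s:s + win_len] for s in starts])
    return wins[:, np.newaxis, :, :]  # singleton "image channel" axis

# 2 s of a hypothetical 8-channel stream sampled at 200 Hz
emg = np.random.randn(8, 400)
x = segment_emg(emg)
print(x.shape)  # (6, 1, 8, 150)
```

Each window then plays the role of one training example for the CNN (or one raw-input vector for SSAE-r).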

153 citations

Journal ArticleDOI
TL;DR: A novel coplanar waveguide-fed rectenna with high efficiency is proposed and implemented in this paper for 2.45-GHz Bluetooth/ wireless local area network applications.
Abstract: A novel coplanar waveguide-fed rectenna with high efficiency is proposed and implemented in this paper for 2.45-GHz Bluetooth/wireless local area network applications. The antenna has compact dimensions of 18 mm × 30 mm and is simulated and manufactured on a low-cost FR4 substrate with a thickness of 1.6 mm. A tuning-stub technique with rectangular slots is used for better impedance matching, enhancing the impedance bandwidth of the antenna with a peak gain of 5.6 dB. The proposed antenna for RF energy harvesting applications exhibits a dipole-like radiation pattern in the H-plane and an omnidirectional pattern in the E-plane with improved radiation efficiency. A single-stage Cockcroft–Walton rectifier with an L-shaped impedance-matching network is designed in Advanced Design System (ADS) and fabricated on an FR4 substrate. The dc output of the rectenna is measured as 3.24 V with a load resistance of 5 kΩ. A simulated peak conversion efficiency of 75.5% is attained, whereas the measured one is 68% with an input signal power of 5 dBm at 2.45 GHz.
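The measured figures are self-consistent: the RF-to-DC conversion efficiency follows directly from the reported DC output voltage, load resistance, and input power.

```python
import math

def dbm_to_watts(p_dbm):
    """Convert a power level in dBm to watts."""
    return 1e-3 * 10 ** (p_dbm / 10)

v_dc, r_load = 3.24, 5e3       # measured DC output and load resistance
p_dc = v_dc ** 2 / r_load      # DC power delivered to the load (~2.1 mW)
p_in = dbm_to_watts(5)         # 5 dBm input signal (~3.16 mW)
eta = p_dc / p_in              # RF-to-DC conversion efficiency
print(f"{eta:.1%}")            # 66.4%, close to the reported 68%
```

The small gap to the reported 68% is expected, since the quoted efficiency is measured at the rectifier rather than recomputed from rounded figures.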

97 citations

Journal ArticleDOI
TL;DR: Results indicate that classifiers with similar within-day performance may diverge substantially over time; training an ANN on multiple days may capture time-dependent variability in the EMG signals, minimizing the need for daily system recalibration.
Abstract: Currently, most of the adopted myoelectric schemes for upper limb prostheses do not provide users with intuitive control. Higher accuracies have been reported using different classification algorithms, but investigation of the reliability of these methods over time is very limited. In this study, we compared for the first time the longitudinal performance of selected state-of-the-art techniques for electromyography (EMG) based classification of hand motions. Experiments were conducted on ten able-bodied and six transradial amputees for seven continuous days. Linear discriminant analysis (LDA), artificial neural network (ANN), support vector machine (SVM), K-nearest neighbor (KNN), and decision trees (TREE) were compared. Comparative analysis showed that the ANN attained the highest classification accuracy, followed by LDA. A three-way repeated-measures ANOVA showed a significant difference (P < 0.001) between EMG types (surface, intramuscular, and combined), days (1–7), classifiers, and their interactions. Performance on the last day was significantly better (P < 0.05) than the first day for all classifiers and EMG types. Within-day classification error (WCE) across all subjects and days for the ANN was: surface (9.12 ± 7.38%), intramuscular (11.86 ± 7.84%), and combined (6.11 ± 7.46%). The between-day analysis in a leave-one-day-out fashion showed that the ANN was the optimal classifier (surface: 21.88 ± 4.14%, intramuscular: 29.33 ± 2.58%, combined: 14.37 ± 3.10%). Results indicate that classifiers with similar within-day performance may diverge substantially over time. Furthermore, training the ANN on multiple days might capture time-dependent variability in the EMG signals and thus minimize the need for daily system recalibration.
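The leave-one-day-out protocol used for the between-day analysis can be sketched as a simple split generator — an illustrative reconstruction, not the authors' code: each of the seven recording days is held out in turn while the classifier trains on the other six.

```python
def leave_one_day_out(days):
    """Yield (train_days, held_out_day) splits over the recording days."""
    days = list(days)
    for test_day in days:
        train = [d for d in days if d != test_day]
        yield train, test_day

splits = list(leave_one_day_out(range(1, 8)))  # days 1..7
print(len(splits))   # 7 splits, one per held-out day
print(splits[0])     # ([2, 3, 4, 5, 6, 7], 1)
```

The between-day error reported per classifier is then the average test error across the seven held-out days.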

81 citations

Journal ArticleDOI
13 Mar 2020-Sensors
TL;DR: An automated method for segmenting lesion boundaries that combines two architectures, the U-Net and the ResNet, collectively called Res-Unet is proposed, which achieves comparable results to the current available state-of-the-art techniques.
Abstract: Clinical treatment of skin lesion is primarily dependent on timely detection and delimitation of lesion boundaries for accurate cancerous region localization. Prevalence of skin cancer is on the higher side, especially that of melanoma, which is aggressive in nature due to its high metastasis rate. Therefore, timely diagnosis is critical for its treatment before the onset of malignancy. To address this problem, medical imaging is used for the analysis and segmentation of lesion boundaries from dermoscopic images. Various methods have been used, ranging from visual inspection to the textural analysis of the images. However, accuracy of these methods is low for proper clinical treatment because of the sensitivity involved in surgical procedures or drug application. This presents an opportunity to develop an automated model with good accuracy so that it may be used in a clinical setting. This paper proposes an automated method for segmenting lesion boundaries that combines two architectures, the U-Net and the ResNet, collectively called Res-Unet. Moreover, we also used image inpainting for hair removal, which improved the segmentation results significantly. We trained our model on the ISIC 2017 dataset and validated it on the ISIC 2017 test set as well as the PH2 dataset. Our proposed model attained a Jaccard Index of 0.772 on the ISIC 2017 test set and 0.854 on the PH2 dataset, which are comparable results to the current available state-of-the-art techniques.
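The Jaccard Index used to score the segmentations is straightforward to compute from binary masks; a minimal NumPy sketch:

```python
import numpy as np

def jaccard(pred, target):
    """Jaccard index (intersection over union) of two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    inter = np.logical_and(pred, target).sum()
    return inter / union

pred   = np.array([[1, 1], [0, 0]])  # predicted lesion mask
target = np.array([[1, 0], [0, 0]])  # ground-truth mask
print(jaccard(pred, target))  # 0.5
```

A score of 0.772 on ISIC 2017 therefore means that, on average, the predicted and ground-truth lesion regions overlap in about 77% of their combined area.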

72 citations

Journal ArticleDOI
TL;DR: Simulations performed to assess the proposed protocols indicate that the three schemes largely minimize end-to-end delay while also reducing the transmission loss of the network.
Abstract: Underwater Acoustic Sensor Networks (UASNs) offer practicable applications in seismic monitoring, sea mine detection, and disaster prevention. In these networks, a fundamental difference between the operational methodologies of routing schemes arises from the requirements of time-critical applications; therefore, there is a need for the design of delay-sensitive techniques. In this paper, the Delay-Sensitive Depth-Based Routing (DSDBR), Delay-Sensitive Energy Efficient Depth-Based Routing (DSEEDBR), and Delay-Sensitive Adaptive Mobility of Courier nodes in Threshold-optimized Depth-based routing (DSAMCTD) protocols are proposed to empower depth-based routing schemes. The performance of the proposed schemes is validated in UASNs. All three schemes formulate delay-efficient Priority Factors (PF) and a Delay-Sensitive Holding Time (DSHT) to minimize end-to-end delay with a small decrease in network throughput. These schemes also employ an optimal weight function (WF) to compute the transmission loss and the speed of the received signal. Furthermore, the solution for delay lies in efficient data forwarding, minimal relative transmissions in the low-depth region, and better forwarder selection. Simulations are performed to assess the proposed protocols, and the results indicate that the three schemes largely minimize end-to-end delay while also reducing the transmission loss of the network.
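A holding-time rule of the kind the paper describes can be illustrated as follows. This is a hypothetical formula for intuition only — the actual DSHT and priority-factor expressions are defined in the paper: a node whose depth gives it a larger advance toward the surface waits less, so the best-placed forwarder transmits first and suppresses the others.

```python
def holding_time(depth_sender, depth_node, max_hold=2.0, tx_range=250.0):
    """Illustrative delay-sensitive holding time (seconds).

    depth_sender, depth_node: depths in meters (larger = deeper).
    A candidate forwarder closer to the surface has a larger depth
    advance, a higher priority factor, and therefore a shorter wait.
    """
    advance = depth_sender - depth_node       # positive => node is shallower
    priority = max(advance, 0.0) / tx_range   # normalized priority factor
    return max_hold * (1.0 - priority)

# sender at 100 m; candidate forwarders at 50 m and 90 m
print(holding_time(100, 50))   # 1.6  — shallower node forwards sooner
print(holding_time(100, 90))   # 1.92 — deeper candidate waits longer
```

Assumed parameters (`max_hold`, `tx_range`) are placeholders; in the proposed protocols the holding time is further shaped by the weight function and threshold optimization.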

61 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
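The mail-filter scenario in the fourth category can be made concrete with a toy learned filter. This is a hypothetical illustration of learning per-user rules from kept and rejected messages (a simple Naive-Bayes-style word model), not a production spam filter:

```python
import math
from collections import Counter

class MailFilter:
    """Toy per-user filter: learns word statistics from messages the
    user keeps or rejects, then scores new mail with add-one smoothing."""

    def __init__(self):
        self.counts = {"keep": Counter(), "reject": Counter()}
        self.totals = {"keep": 0, "reject": 0}

    def learn(self, text, label):
        words = text.lower().split()
        self.counts[label].update(words)
        self.totals[label] += len(words)

    def predict(self, text):
        scores = {}
        for label in ("keep", "reject"):
            total = self.totals[label] + 1
            scores[label] = sum(
                math.log((self.counts[label][w] + 1) / total)
                for w in text.lower().split())
        return max(scores, key=scores.get)

f = MailFilter()
f.learn("buy cheap pills now", "reject")
f.learn("buy cheap pills now", "reject")
f.learn("meeting agenda attached please review", "keep")
print(f.predict("cheap pills now"))  # reject
```

Each user's filter adapts automatically as they keep rejecting or keeping mail — exactly the per-user customization the passage argues cannot be hand-programmed at scale.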

13,246 citations

Journal Article
TL;DR: In this book, two major figures in adaptive control provide a wealth of material for researchers, practitioners, and students; it can be used by designers of control systems to enhance their work through information on many new theoretical developments, and by mathematical control theory specialists to adapt their research to practical needs.
Abstract: This book, written by two major figures in adaptive control, provides a wealth of material for researchers, practitioners, and students. While some researchers in adaptive control may note the absence of a particular topic, the book‘s scope represents a high-gain instrument. It can be used by designers of control systems to enhance their work through the information on many new theoretical developments, and can be used by mathematical control theory specialists to adapt their research to practical needs. The book is strongly recommended to anyone interested in adaptive control.

1,814 citations

Journal ArticleDOI
TL;DR: This paper presents the IoT technology from a bird's eye view covering its statistical/architectural trends, use cases, challenges and future prospects, and discusses challenges in the implementation of 5G-IoT due to high data-rates requiring both cloud-based platforms and IoT devices based edge computing.
Abstract: The Internet of Things (IoT)-centric concepts like augmented reality, high-resolution video streaming, self-driven cars, smart environments, e-health care, etc. now have a ubiquitous presence. These applications require higher data rates, large bandwidth, increased capacity, low latency, and high throughput. In light of these emerging concepts, IoT has revolutionized the world by providing seamless connectivity between heterogeneous networks (HetNets). The eventual aim of IoT is to introduce plug-and-play technology providing the end-user with ease of operation, remote access, control, and configurability. This paper presents the IoT technology from a bird's eye view covering its statistical/architectural trends, use cases, challenges, and future prospects. The paper also presents a detailed and extensive overview of the emerging 5G-IoT scenario. Fifth Generation (5G) cellular networks provide key enabling technologies for ubiquitous deployment of the IoT technology. These include carrier aggregation, multiple-input multiple-output (MIMO), massive-MIMO (M-MIMO), coordinated multipoint processing (CoMP), device-to-device (D2D) communications, centralized radio access network (CRAN), software-defined wireless sensor networking (SD-WSN), network function virtualization (NFV), and cognitive radios (CRs). This paper presents an exhaustive review of these key enabling technologies and also discusses the new emerging use cases of 5G-IoT driven by advances in artificial intelligence, machine and deep learning, ongoing 5G initiatives, quality of service (QoS) requirements in 5G, and its standardization issues. Finally, the paper discusses challenges in the implementation of 5G-IoT due to high data rates requiring both cloud-based platforms and IoT-device-based edge computing.

591 citations