Author

Benyamin Allahgholizadeh Haghi

Bio: Benyamin Allahgholizadeh Haghi is an academic researcher at the California Institute of Technology. His research focuses on recurrent neural networks and artificial neural networks. He has an h-index of 4 and has co-authored 7 publications receiving 75 citations. His previous affiliations include Sharif University of Technology and Google.

Papers
Journal ArticleDOI
TL;DR: This work proposes an efficient hardware architecture to implement gradient boosted trees in applications under stringent power, area, and delay constraints, such as medical devices, and introduces the concepts of asynchronous tree operation and sequential feature extraction to achieve an unprecedented energy and area efficiency.
Abstract: Biomedical applications often require classifiers that are both accurate and cheap to implement. Today, deep neural networks achieve state-of-the-art accuracy in most learning tasks involving large sets of unstructured data. However, deep learning may not be beneficial in problems with limited training sets and computational resources, or under domain-specific test-time constraints. Among other algorithms, ensembles of decision trees, particularly gradient-boosted models, have recently been very successful in machine learning competitions. Here, we propose an efficient hardware architecture to implement gradient-boosted trees in applications under stringent power, area, and delay constraints, such as medical devices. Specifically, we introduce the concepts of asynchronous tree operation and sequential feature extraction to achieve unprecedented energy and area efficiency. The proposed architecture is evaluated on automated seizure detection for epilepsy, using 3074 h of intracranial EEG data from 26 patients with 393 seizures. Average F1 scores of 99.23% and 87.86% are achieved for random and block-wise splitting of data into train/test sets, respectively, with an average detection latency of 1.1 s. The proposed classifier is fabricated in a 65-nm TSMC process, consuming 41.2 nJ/class in a total area of 540 × 1850 μm². This design improves on the state of the art with a 27× reduction in the energy-area-latency product. Moreover, the proposed gradient-boosting architecture offers the flexibility to accommodate a variable tree count specific to each patient, trading predictive accuracy for energy. This patient-specific, energy-quality-scalable classifier holds great promise for low-power sensor data classification in biomedical applications.
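
To make the classifier side concrete, here is a minimal software sketch of a small gradient-boosted tree ensemble on windowed EEG features, using scikit-learn with synthetic placeholder data; it is not the paper's hardware pipeline, and the feature matrix, labels, and split are illustrative assumptions only.

```python
# Minimal sketch (not the paper's hardware pipeline): a small
# gradient-boosted tree ensemble on windowed EEG features, trained with
# scikit-learn. The feature matrix, labels, and split are placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))    # hypothetical per-window iEEG features
y = rng.integers(0, 2, size=1000)  # hypothetical seizure / non-seizure labels

# Block-wise split: train on earlier windows, test on later ones,
# in the spirit of the paper's block-wise train/test protocol.
split = int(0.8 * len(X))
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

# A small tree count keeps the model cheap; the paper treats the
# per-patient tree count as an accuracy-versus-energy knob.
clf = GradientBoostingClassifier(n_estimators=8, max_depth=4)
clf.fit(X_train, y_train)
print("F1:", f1_score(y_test, clf.predict(X_test)))
```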

87 citations

Proceedings ArticleDOI
05 Nov 2015
TL;DR: This paper demonstrates breast tumor classification based on TA imaging, proposes the interferogram of received pressure waves as the feature basis for classification, and demonstrates the robustness of this approach within a finite-difference time-domain (FDTD) simulation framework.
Abstract: Microwave-induced thermoacoustic (TA) imaging combines the dielectric/conductivity contrast in the microwave range with the high resolution of ultrasound imaging. Lack of ionizing radiation exposure in TA imaging makes this technique suitable for frequent screening applications, as with breast cancer screening. In this paper we demonstrate breast tumor classification based on TA imaging. The sensitivity of the signal-based classification algorithm to errors in the estimation of tumor locations is investigated. To reduce this sensitivity, we propose to use the interferogram of received pressure waves as the feature basis used for classification, and demonstrate the robustness based on a finite-difference time-domain (FDTD) simulation framework.
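
As a sketch of the interferogram idea, the snippet below builds a feature vector from pairwise cross-correlations of received pressure traces. This is one plausible reading of the feature, with synthetic signals standing in for FDTD-simulated data.

```python
# Hedged sketch of the interferogram feature: pairwise cross-correlations
# between received pressure traces, intended to be less sensitive to
# errors in estimated tumor location. Synthetic signals stand in for
# FDTD-simulated data; this is not the paper's exact feature definition.
import numpy as np

def interferogram_features(signals):
    """signals: (n_sensors, n_samples) array of pressure traces.
    Returns the stacked normalized cross-correlations of all sensor pairs."""
    n = signals.shape[0]
    feats = []
    for i in range(n):
        for j in range(i + 1, n):
            xc = np.correlate(signals[i], signals[j], mode="full")
            norm = np.linalg.norm(signals[i]) * np.linalg.norm(signals[j])
            feats.append(xc / (norm + 1e-12))
    return np.concatenate(feats)

rng = np.random.default_rng(1)
sensors = rng.normal(size=(4, 256))           # placeholder pressure traces
print(interferogram_features(sensors).shape)  # one feature vector per scan
```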

11 citations

Proceedings ArticleDOI
20 Mar 2019
TL;DR: This work describes a BMI system using electrodes implanted in the parietal lobe of a tetraplegic subject and compares performance across four decoding algorithms: a Kalman filter, a two-layer Deep Neural Network, a Recurrent Neural Network with a SimpleRNN unit cell, and an RNN with a Long Short-Term Memory (LSTM) unit cell.
Abstract: Brain-machine interfaces (BMIs) have shown promising results in providing control over assistive devices for paralyzed patients. In this work we describe a BMI system using electrodes implanted in the parietal lobe of a tetraplegic subject. Neural data used for decoding were recorded in five 3-minute blocks during the same session. Within each block, the subject used motor imagery to control a cursor in a 2D center-out task. We compare performance for four algorithms: a Kalman filter, a two-layer Deep Neural Network (DNN), a Recurrent Neural Network (RNN) with a SimpleRNN unit cell, and an RNN with a Long Short-Term Memory (LSTM) unit cell. The decoders achieved Pearson Correlation Coefficients (ρ) of 0.48, 0.39, 0.77, and 0.75, respectively, in the Y-coordinate, and 0.24, 0.20, 0.46, and 0.47, respectively, in the X-coordinate.
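
For illustration, a minimal sketch of one decoder in the comparison follows: an LSTM mapping binned firing rates to 2D cursor kinematics, scored per coordinate with the Pearson correlation. Shapes, bin counts, and data are placeholder assumptions, not the recorded dataset.

```python
# Minimal sketch of one decoder from the comparison: an LSTM mapping
# binned firing rates to 2D cursor kinematics, scored per coordinate with
# the Pearson correlation. All shapes and data are placeholders, not the
# recorded parietal-lobe dataset.
import numpy as np
import tensorflow as tf
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
rates = rng.poisson(3.0, size=(500, 10, 96)).astype("float32")  # (trials, bins, units)
cursor = rng.normal(size=(500, 2)).astype("float32")            # (trials, [x, y])

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10, 96)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(2),    # predicted x and y
])
model.compile(optimizer="adam", loss="mse")
model.fit(rates[:400], cursor[:400], epochs=3, verbose=0)

pred = model.predict(rates[400:], verbose=0)
for k, name in enumerate(["x", "y"]):
    print(name, "rho = %.2f" % pearsonr(cursor[400:, k], pred[:, k])[0])
```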

10 citations

Proceedings ArticleDOI
01 Jul 2018
TL;DR: The proposed system-on-chip (SoC) breaks the strict energy-area-delay trade-off by employing area- and memory-efficient techniques and achieves a 27× improvement in the energy-area-latency product.
Abstract: A 41.2 nJ/class, 32-channel, patient-specific on-chip classification architecture for epileptic seizure detection is presented. The proposed system-on-chip (SoC) breaks the strict energy-area-delay trade-off by employing area- and memory-efficient techniques. An ensemble of eight gradient-boosted decision trees, each with a fully programmable Feature Extraction Engine (FEE) and FIR filters, continuously processes the input channels. In a closed-loop architecture, the FEE reuses a single filter structure to execute the top-down flow of the decision tree. FIR filter coefficients are multiplexed from a shared memory. The 540 × 1850 μm² prototype with a 1 kB register-type memory is fabricated in a TSMC 65-nm CMOS process. The proposed on-chip classifier is verified on 2253 hours of intracranial EEG (iEEG) data from 20 patients including 361 seizures, and achieves a specificity of 88.1% and a sensitivity of 83.7%. Compared to the state of the art, the proposed classifier achieves a 27× improvement in the energy-area-latency product.
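
The sequential, on-demand style of tree evaluation can be sketched in software: during a root-to-leaf pass, only the feature the visited node needs is computed, mirroring how the closed-loop FEE reuses one filter structure per step. The node layout and feature functions below are hypothetical, for illustration only.

```python
# Software analogue of sequential feature extraction: during a root-to-leaf
# pass, compute only the feature the visited node needs, rather than
# extracting every feature up front. Node layout and feature functions
# are hypothetical.
import numpy as np

# Each internal node: (feature_id, threshold, left_child, right_child);
# each leaf: (None, output_value, None, None).
TREE = [
    (0, 0.5, 1, 2),
    (None, -1.0, None, None),   # leaf: non-seizure vote
    (1, 2.0, 3, 4),
    (None, -1.0, None, None),   # leaf: non-seizure vote
    (None, +1.0, None, None),   # leaf: seizure vote
]

FEATURES = {
    0: lambda x: np.mean(np.abs(x)),  # e.g. mean absolute amplitude
    1: lambda x: np.var(x),           # e.g. signal variance
}

def evaluate(tree, window):
    node = 0
    while True:
        feat_id, thr, left, right = tree[node]
        if feat_id is None:           # reached a leaf
            return thr
        # Only this node's feature is extracted at this step.
        node = left if FEATURES[feat_id](window) < thr else right

window = np.random.default_rng(3).normal(size=128)  # placeholder iEEG window
print(evaluate(TREE, window))
```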

8 citations

Journal ArticleDOI
TL;DR: This study critically reviews and compares recent ECG clustering techniques, discusses their applications and limitations, and presents the information needed to adopt the appropriate algorithm for a specific application.
Abstract: Electrocardiography is the gold-standard technique for detecting abnormal heart conditions. Automatic detection of electrocardiogram (ECG) abnormalities helps clinicians analyze the large amount of data produced daily by cardiac monitors. As the number of abnormal ECG samples with cardiologist-supplied labels required to train supervised machine learning models is limited, there is a growing need for unsupervised learning methods for ECG analysis. Unsupervised learning aims to partition ECG samples into distinct abnormality classes without cardiologist-supplied labels, a process referred to as ECG clustering. In addition to abnormality detection, ECG clustering has recently uncovered inter- and intra-individual patterns that reveal valuable information about the whole body and mind, such as emotions, mental disorders, and metabolic levels. ECG clustering can also resolve specific challenges facing supervised learning systems, such as the imbalanced-data problem, and can enhance biometric systems. While several reviews exist on supervised ECG systems, a comprehensive review of unsupervised ECG analysis techniques is still lacking. This study reviews ECG clustering techniques developed mainly in the last decade, focusing on recent machine learning and deep learning algorithms and their practical applications. We critically review and compare these techniques, discuss their applications and limitations, and provide future research directions. This review provides further insight into ECG clustering and presents the information needed to adopt the appropriate algorithm for a specific application.
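
As a minimal illustration of ECG clustering in the sense used here, the sketch below partitions unlabeled synthetic "beats" into two groups with k-means; real pipelines would operate on detected heartbeats or learned embeddings.

```python
# Minimal illustration of ECG clustering: partition unlabeled heartbeat
# segments into groups with k-means. Real pipelines would operate on
# detected beats or learned embeddings; these "beats" are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 200)
# Two synthetic beat morphologies plus noise, 200 samples per beat.
normal = np.sin(2 * np.pi * t) + rng.normal(0, 0.1, size=(300, 200))
abnormal = np.sign(np.sin(2 * np.pi * t)) + rng.normal(0, 0.1, size=(100, 200))
beats = np.vstack([normal, abnormal])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(beats)
print(np.bincount(labels))  # cluster sizes, ideally close to [300, 100]
```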

5 citations


Cited by
Journal ArticleDOI
TL;DR: Six machine learning models are used to predict the daily carbon price and trading volume of eight carbon markets in China (Beijing, Shenzhen, Guangdong, Hubei, Shanghai, Fujian, Tianjin, and Chongqing), with an advanced data-denoising method applied in the models to smooth the raw data.
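
A hedged sketch of the general recipe (denoise the series, then fit and compare several regressors) appears below; a simple moving average stands in for the paper's advanced denoising method, and the price series is synthetic.

```python
# Hedged sketch of the recipe: denoise a daily price series, then fit and
# compare several regressors. A moving average stands in for the paper's
# advanced denoising method; the series and features are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(5)
price = np.cumsum(rng.normal(0, 1, 600)) + 50              # placeholder prices
smooth = np.convolve(price, np.ones(5) / 5, mode="valid")  # denoising stand-in

# Lag features: predict the next day from the last 10 smoothed days.
X = np.lib.stride_tricks.sliding_window_view(smooth[:-1], 10)
y = smooth[10:]
split = int(0.8 * len(X))

for model in (LinearRegression(),
              RandomForestRegressor(n_estimators=100, random_state=0),
              GradientBoostingRegressor(random_state=0)):
    model.fit(X[:split], y[:split])
    err = mean_absolute_error(y[split:], model.predict(X[split:]))
    print(type(model).__name__, "MAE = %.3f" % err)
```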

140 citations

Journal ArticleDOI
28 Jun 2021 - Sensors
TL;DR: This survey gives a comprehensive overview of key application areas of EML technology, points out key research directions, and highlights key take-away lessons for future research exploration in the embedded machine learning domain.
Abstract: Embedded systems technology is undergoing a phase of transformation owing to novel advancements in computer architecture and breakthroughs in machine learning applications. The areas of application of embedded machine learning (EML) include accurate computer vision schemes, reliable speech recognition, innovative healthcare, robotics, and more. However, there exists a critical drawback in the efficient implementation of ML algorithms targeting embedded applications: machine learning algorithms are generally computationally and memory intensive, making them unsuitable for resource-constrained environments such as embedded and mobile devices. To implement these compute- and memory-intensive algorithms efficiently within the embedded and mobile computing space, innovative optimization techniques are required at the algorithm and hardware levels. To this end, this survey explores current research trends within this space. First, we present a brief overview of compute-intensive machine learning algorithms such as hidden Markov models (HMMs), k-nearest neighbors (k-NNs), support vector machines (SVMs), Gaussian mixture models (GMMs), and deep neural networks (DNNs). Furthermore, we consider different optimization techniques currently adopted to squeeze these computationally and memory-intensive algorithms into resource-limited embedded and mobile environments. Additionally, we discuss the implementation of these algorithms in microcontroller units, mobile devices, and hardware accelerators. Finally, we give a comprehensive overview of key application areas of EML technology, point out key research directions, and highlight key take-away lessons for future research exploration in the embedded machine learning domain.
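
As one concrete example of the optimization techniques such surveys cover, the sketch below applies post-training 8-bit quantization to a weight matrix in plain NumPy; it illustrates the generic memory-saving idea, not any specific framework's API.

```python
# Illustrative sketch of one common embedded-ML optimization: post-training
# 8-bit quantization of a weight matrix in plain NumPy. This shows the
# generic memory-saving idea, not any specific framework's API.
import numpy as np

def quantize_int8(w):
    """Symmetric quantization: float weights -> int8 values plus a scale."""
    scale = max(np.max(np.abs(w)) / 127.0, 1e-12)
    return np.round(w / scale).astype(np.int8), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(6)
w = rng.normal(0, 0.1, size=(128, 64)).astype(np.float32)
q, s = quantize_int8(w)
print("bytes: %d -> %d" % (w.nbytes, q.nbytes))  # 4x smaller footprint
print("max abs error: %.5f" % np.max(np.abs(w - dequantize(q, s))))
```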

66 citations

Journal ArticleDOI
TL;DR: This review explores topics ranging from signal acquisition analog circuits to classification algorithms and dedicated digital signal processing circuits for detection and prediction purposes, to provide a comprehensive and useful guideline for the construction, implementation and optimization of wearable and integrated smart seizure prediction systems.
Abstract: Recent studies have investigated seizure prediction, creating the possibility of preempting epileptic seizures. Correct seizure prediction can significantly improve the standard of living for the majority of epileptic patients, as the unpredictability of seizures is a major concern for them. Today, the development of algorithms, particularly in the field of machine learning, enables reliable and accurate seizure prediction using desktop computers. However, despite extensive research effort being devoted to developing seizure-detection integrated circuits (ICs), dedicated seizure-prediction ICs have not yet been developed. We believe that interdisciplinary study of system architecture, analog and digital ICs, and machine learning algorithms can promote the translation of scientific theory into a more realistic intelligent, integrated, and low-power system that can truly improve the standard of living for epileptic patients. This review explores topics ranging from analog signal-acquisition circuits to classification algorithms and dedicated digital signal processing circuits for detection and prediction purposes, to provide a comprehensive and useful guideline for the construction, implementation, and optimization of wearable and integrated smart seizure-prediction systems.

56 citations

Journal ArticleDOI
24 Jan 2020 - Entropy
TL;DR: The extensive experimental results indicated that the proposed CEEMD-XGBoost can significantly enhance the detection performance of epileptic seizures in terms of sensitivity, specificity, and accuracy.
Abstract: Epilepsy is a common nervous system disease characterized by recurrent seizures. An electroencephalogram (EEG) records neural activity and is commonly used for the diagnosis of epilepsy. To achieve accurate detection of epileptic seizures, an automatic detection approach integrating complementary ensemble empirical mode decomposition (CEEMD) and extreme gradient boosting (XGBoost), named CEEMD-XGBoost, is proposed. First, the decomposition method CEEMD, which is capable of effectively reducing the influence of mode mixing and end effects, is utilized to divide raw EEG signals into a set of intrinsic mode functions (IMFs) and residues. Second, multi-domain features are extracted from the raw signals and the decomposed components, and are further selected according to the importance scores of the extracted features. Finally, XGBoost is applied to develop the epileptic seizure detection model. Experiments were conducted on two benchmark epilepsy EEG datasets, the Bonn dataset and the CHB-MIT (Children's Hospital Boston and Massachusetts Institute of Technology) dataset, to evaluate the performance of the proposed CEEMD-XGBoost. The extensive experimental results indicate that, compared with some previous EEG classification models, CEEMD-XGBoost can significantly enhance the detection performance of epileptic seizures in terms of sensitivity, specificity, and accuracy.
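
A rough software sketch of this pipeline follows, assuming the PyEMD and xgboost packages; PyEMD's CEEMDAN (complete ensemble EMD with adaptive noise) stands in for CEEMD here, and the signals, features, and labels are synthetic placeholders.

```python
# Rough sketch of the CEEMD-XGBoost pipeline: decompose each EEG segment
# into IMFs, extract simple statistics per IMF, classify with XGBoost.
# Assumes the PyEMD and xgboost packages; CEEMDAN stands in for CEEMD,
# and all data below are synthetic.
import numpy as np
from PyEMD import CEEMDAN
from xgboost import XGBClassifier

rng = np.random.default_rng(7)
decomposer = CEEMDAN(trials=20)

def segment_features(sig, n_imfs=4):
    imfs = decomposer(sig)
    stats = [[imf.mean(), imf.std(), np.abs(imf).max()]
             for imf in imfs[:n_imfs]]
    while len(stats) < n_imfs:   # pad if fewer IMFs were produced
        stats.append([0.0, 0.0, 0.0])
    return np.concatenate(stats)

segments = rng.normal(size=(20, 256))   # placeholder EEG segments
labels = rng.integers(0, 2, size=20)    # placeholder seizure labels
X = np.vstack([segment_features(s) for s in segments])

clf = XGBClassifier(n_estimators=50, max_depth=3)
clf.fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```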

56 citations