
Showing papers by "Keshab K. Parhi published in 2016"


Journal ArticleDOI
TL;DR: A novel patient-specific algorithm for prediction of seizures in epileptic patients from either one or two single-channel or bipolar channel intra-cranial or scalp electroencephalogram (EEG) recordings with low hardware complexity.
Abstract: Prediction of seizures is a difficult problem as the EEG patterns are not wide-sense stationary and change from seizure to seizure, electrode to electrode, and from patient to patient. This paper presents a novel patient-specific algorithm for prediction of seizures in epileptic patients from either one or two single-channel or bipolar channel intra-cranial or scalp electroencephalogram (EEG) recordings with low hardware complexity. Spectral power features are extracted and their ratios are computed. For each channel, a total of 44 features, including 8 absolute spectral powers, 8 relative spectral powers, and 28 spectral power ratios, are extracted every two seconds using a 4-second window with a 50% overlap. These features are then ranked and selected in a patient-specific manner using a two-step feature selection. Selected features are further processed by a second-order Kalman filter and then input to a linear support vector machine (SVM) classifier. The algorithm is tested on the intra-cranial EEG (iEEG) from the Freiburg database and scalp EEG (sEEG) from the MIT Physionet database. The Freiburg database contains 80 seizures among 18 patients in 427 hours of recordings. The MIT EEG database contains 78 seizures from 17 children in 647 hours of recordings. It is shown that the proposed algorithm can achieve a sensitivity of 100% and an average false positive rate (FPR) of 0.0324 per hour for the iEEG (Freiburg) database and a sensitivity of 98.68% and an average FPR of 0.0465 per hour for the sEEG (MIT) database. These results are obtained with leave-one-out cross-validation where the seizure being tested is always left out from the training set. The proposed algorithm also has low complexity, as the spectral powers can be computed using the FFT. The area and power consumption of the proposed linear SVM are 2 to 3 orders of magnitude less than those of a radial basis function kernel SVM (RBF-SVM) classifier.
Furthermore, the total energy consumption of a system using the linear SVM is reduced by 8% to 23% compared to a system using the RBF-SVM.
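The windowed feature extraction described above can be sketched numerically. The counts (8 bands, 44 features per channel, 4-second windows with 50% overlap) come from the abstract; the sampling rate and band edges below are illustrative assumptions, not the paper's values:

```python
# Sketch of the 44-feature extraction for ONE channel. Assumed: 256 Hz
# sampling and these band edges; the paper's exact bands are not listed
# in the abstract.
import numpy as np
from itertools import combinations

FS = 256                      # assumed sampling rate (Hz)
BANDS = [(0.5, 4), (4, 8), (8, 13), (13, 30),
         (30, 50), (50, 70), (70, 90), (90, 128)]  # illustrative edges

def features_44(window):
    """window: 4-second EEG segment (1-D array of FS*4 samples)."""
    spec = np.abs(np.fft.rfft(window)) ** 2          # power spectrum via FFT
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    absolute = np.array([spec[(freqs >= lo) & (freqs < hi)].sum()
                         for lo, hi in BANDS])       # 8 absolute powers
    relative = absolute / absolute.sum()             # 8 relative powers
    ratios = np.array([absolute[i] / absolute[j]     # 28 pairwise power ratios
                       for i, j in combinations(range(8), 2)])
    return np.concatenate([absolute, relative, ratios])

win = np.random.default_rng(0).standard_normal(FS * 4)
print(len(features_44(win)))   # 44 features per channel, computed every 2 s
```

Note that 8 absolute + 8 relative powers plus C(8,2) = 28 ratios is exactly how the abstract's 44 features per channel add up.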

171 citations


Journal ArticleDOI
TL;DR: A novel classification-based optic disc segmentation algorithm that detects the OD boundary and the location of vessel origin (VO) pixel and can be used for automated detection of retinal pathologies, such as glaucoma, diabetic retinopathy, and maculopathy is presented.
Abstract: This paper presents a novel classification-based optic disc (OD) segmentation algorithm that detects the OD boundary and the location of vessel origin (VO) pixel. First, the green plane of each fundus image is resized and morphologically reconstructed using a circular structuring element. Bright regions are then extracted from the morphologically reconstructed image that lie in close vicinity of the major blood vessels. Next, the bright regions are classified as bright probable OD regions and non-OD regions using six region-based features and a Gaussian mixture model classifier. The classified bright probable OD region with maximum Vessel-Sum and Solidity is detected as the best candidate region for the OD. Other bright probable OD regions within 1-disc diameter from the centroid of the best candidate OD region are then detected as remaining candidate regions for the OD. A convex hull containing all the candidate OD regions is then estimated, and a best-fit ellipse across the convex hull becomes the segmented OD boundary. Finally, the centroid of major blood vessels within the segmented OD boundary is detected as the VO pixel location. The proposed algorithm has low computation time complexity and it is robust to variations in image illumination, imaging angles, and retinal abnormalities. This algorithm achieves 98.8%–100% OD segmentation success and OD segmentation overlap score in the range of 72%–84% on images from the six public datasets of DRIVE, DIARETDB1, DIARETDB0, CHASE_DB1, MESSIDOR, and STARE in less than 2.14 s per image. Thus, the proposed algorithm can be used for automated detection of retinal pathologies, such as glaucoma, diabetic retinopathy, and maculopathy.
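One step of the pipeline above, the best-fit ellipse across the convex hull, can be illustrated with a textbook moment-based fit. The function below is a generic sketch (the paper's actual fitting procedure may differ), with the hull points replaced by synthetic data:

```python
# Minimal sketch: fit an ellipse to a set of boundary points (standing in
# for the convex hull of the candidate OD regions) from their second-order
# moments. All names are illustrative.
import numpy as np

def moment_ellipse(points):
    """points: (N, 2) array -> (center, full axis lengths, orientation)."""
    center = points.mean(axis=0)
    cov = np.cov((points - center).T)             # 2x2 covariance of the points
    evals, evecs = np.linalg.eigh(cov)            # principal axes (ascending)
    axes = 2.0 * np.sqrt(2.0 * evals)             # full axis lengths (moment fit)
    angle = np.arctan2(evecs[1, 1], evecs[0, 1])  # orientation of major axis
    return center, axes, angle

# Points on a circle of radius 10: the fitted ellipse is (nearly) a circle
# of diameter 20, so the fit can be sanity-checked by eye.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.stack([10 * np.cos(t), 10 * np.sin(t)], axis=1)
center, axes, _ = moment_ellipse(pts)
```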

98 citations


Journal ArticleDOI
TL;DR: The abnormal topological properties and connectivity found in this study may add new knowledge to the current understanding of functional brain networks in BPD; however, given the small sample size, the results should be viewed as exploratory and need to be validated on larger samples in future work.

84 citations


Journal ArticleDOI
TL;DR: This paper demonstrates that, despite feedback in IIR filters, these filters can be implemented using stochastic logic, and hardware synthesis results show that these filter structures require lower hardware area and power compared to two's complement realizations.
Abstract: This paper addresses implementation of digital IIR filters using stochastic computing. Stochastic computing requires fewer logic gates and is inherently fault-tolerant. Thus, these structures are well suited for nanoscale CMOS technologies. While it is easy to realize FIR filters using stochastic computing, implementation of IIR digital filters is non-trivial. Stochastic logic assumes independence of input signals; however, feedback in IIR digital filters leads to correlation of input signals, and the independence assumption is violated. This paper demonstrates that, despite feedback in IIR filters, these filters can be implemented using stochastic logic. The key to stochastic implementation is selection of an IIR filter structure where the states are orthogonal and are, therefore, uncorrelated. Two categories of architectures are presented for stochastic IIR digital filters. One category is based on the basic lattice filter representation where the states are orthogonal, and the other is based on the normalized lattice filter representation where the states are orthonormal. For each category, three stochastic implementations are introduced. The first is based on a state-space description of the IIR filter derived from the lattice filter structure. The second is based on transforming the lattice IIR digital filter into an equivalent form that can exploit the novel scaling approach developed for inner product computations. The third is an optimized stochastic implementation with a reduced number of binary multipliers. Simulation results demonstrate high signal-to-error ratio and fault tolerance in these structures. Furthermore, hardware synthesis results show that these filter structures require lower hardware area and power compared to two's complement realizations.
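The abstract's premise, arithmetic on stochastic bit streams, can be seen in its simplest form: in bipolar coding a stream whose 1-probability is p encodes the value x = 2p − 1, and a single XNOR gate multiplies two independent streams. The simulation below illustrates why the logic is so cheap; it is not the paper's lattice-filter architecture:

```python
# Bipolar stochastic multiplication with one XNOR gate, simulated on
# pseudo-random bit streams.
import numpy as np

rng = np.random.default_rng(1)
N = 1 << 18                                  # stream length (accuracy ~ 1/sqrt(N))

def to_stream(x):                            # bipolar encode: P(bit=1) = (x+1)/2
    return rng.random(N) < (x + 1) / 2

def from_stream(bits):                       # bipolar decode: x = 2p - 1
    return 2 * bits.mean() - 1

a, b = 0.6, -0.5
prod = from_stream(~(to_stream(a) ^ to_stream(b)))   # XNOR = bipolar multiply
```

For independent streams the XNOR output's 1-probability works out to exactly the bipolar encoding of a·b, so `prod` converges to −0.3 as N grows.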

51 citations


Proceedings ArticleDOI
01 Aug 2016
TL;DR: This paper explores the use of Pearson's correlation scores and network based features to predict if a subject has OCD and achieves 80% accuracy with 81% sensitivity and 77% specificity.
Abstract: Obsessive-compulsive disorder (OCD) is a serious mental illness that affects the overall quality of patients' daily lives. Accurate diagnosis of this disorder is a primary step towards effective treatment. Diagnosing OCD is a lengthy procedure that involves interviews, symptom rating scales and behavioral observation as well as the experience of a clinician. Discovering signal processing and network based biomarkers from functional magnetic resonance scans of patients may greatly assist the clinicians in their diagnostic assessments. In this paper, we explore the use of Pearson's correlation scores and network based features to predict if a subject has OCD. We extracted mean time series from 112 brain regions and decomposed them into five frequency bands. The mean time courses were used to calculate the Pearson's correlation matrix and network based features for each band. The minimum redundancy maximum relevance feature selection method is applied to the Pearson's correlation matrix and network based features from each frequency band to select the best features for the final predictor. A leave-one-out cross validation method is used to evaluate the final predictor's performance. Our proposed methodology achieves 80% accuracy (23 out of 29 subjects classified correctly) with 81% sensitivity (13 out of 16 OCD subjects identified correctly) and 77% specificity (10 out of 13 controls identified correctly) using leave-one-out with in-fold feature ranking and selection. The most discriminating feature bands are 0.06–0.11 Hz for Pearson's correlation and 0.03–0.06 Hz for network based features. The high classification accuracy indicates the predictive power of the network features as well as carefully chosen Pearson's correlation values.
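The correlation feature construction described above is easy to sketch: Pearson correlations between all pairs of the 112 regional time series give 112·111/2 = 6216 candidate features per frequency band. The region count is from the abstract; the time series here are random stand-ins:

```python
# Upper-triangle Pearson correlations as a feature vector, per band.
import numpy as np

rng = np.random.default_rng(0)
ts = rng.standard_normal((112, 200))      # 112 regions x 200 time points (synthetic)
corr = np.corrcoef(ts)                    # 112 x 112 Pearson correlation matrix
iu = np.triu_indices(112, k=1)            # unique region pairs only
features = corr[iu]
print(features.size)                      # 6216 candidate features per band
```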

31 citations


Proceedings ArticleDOI
22 May 2016
TL;DR: Several methods, ranging from the algorithm level to the architecture level, are presented to improve the error and hardware performance of a stochastic BP decoder for polar codes; the approaches proposed in this work provide a potential low-cost solution for stochastic BP decoder design.
Abstract: Polar codes have become one of the most attractive topics in the coding theory community because of their provable capacity-achieving property. The belief propagation (BP) algorithm, as one of the popular approaches for decoding polar codes, has the unique advantage of high parallelism but suffers from high computation complexity, which translates to very large silicon area and high power consumption. This paper, for the first time, exploits the design of a polar BP decoder using stochastic computing. Several methods ranging from the algorithm level to the architecture level are presented to improve the error and hardware performance of the stochastic BP decoder. The approaches proposed in this work provide a potential low-cost solution for stochastic BP decoder design.
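For background on the computation the abstract says is expensive: BP decoding of polar codes repeatedly evaluates two message updates on log-likelihood ratios, commonly approximated in hardware by the min-sum form f(a, b) = sign(a)·sign(b)·min(|a|, |b|) together with g(a, b, u) = b + (1 − 2u)·a. These are the generic updates, not the paper's stochastic design:

```python
# Min-sum style LLR updates used in polar BP/SC decoding hardware.
import numpy as np

def f_minsum(a, b):
    """Check-node style update on LLRs (min-sum approximation)."""
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g(a, b, u):
    """Variable-node style update; u is a partial-sum bit (0 or 1)."""
    return b + (1 - 2 * u) * a
```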

26 citations


Proceedings ArticleDOI
01 Aug 2016
TL;DR: A novel method that classifies neovascularizations in the 1-optic disc (OD) diameter region (NVD) and elsewhere (NVE) separately to achieve low false positive rates; the proposed neovascularization classification can play a key role in automated screening and prioritization of patients with diabetic retinopathy.
Abstract: Neovascularization is the primary manifestation of proliferative diabetic retinopathy (PDR) that can lead to acquired blindness. This paper presents a novel method that classifies neovascularizations in the 1-optic disc (OD) diameter region (NVD) and elsewhere (NVE) separately to achieve low false positive rates of neovascularization classification. First, the OD region and blood vessels are extracted. Next, the major blood vessel segments in the 1-OD diameter region are classified for NVD, and minor blood vessel segments elsewhere are classified for NVE. For NVD and NVE classifications, optimal region-based feature sets of 10 and 6 features, respectively, are used. The proposed method achieves classification sensitivity, specificity and accuracy for NVD and NVE of 74%, 98.2%, 87.6%, and 61%, 97.5%, 92.1%, respectively. Also, the proposed method achieves 86.4% sensitivity and 76% specificity for screening images with PDR from public and local data sets. Thus, the proposed NVD and NVE detection methods can play a key role in automated screening and prioritization of patients with diabetic retinopathy.

23 citations


Proceedings ArticleDOI
08 Aug 2016
TL;DR: This work proposes several enhanced thresholding strategies for determining stable CRPs and shows a high degree of uniqueness and randomness in the PUF responses which can be attributed to the carefully optimized circuit layout.
Abstract: In this work, we present probability based response generation schemes for MUX based Physical Unclonable Functions (PUFs). Compared to previous implementations where temporal majority voting (TMV) based on limited samples and coarse criteria was utilized to determine final responses, our design can collect soft responses with detailed probability information using simple on-chip circuits. Thresholds with fine accuracy are applied to efficiently distinguish stable and unstable challenge response pairs (CRPs). A 32nm test chip including both linear and feed-forward MUX PUFs was implemented for concept verification. Based on a detailed analysis of the hardware data, we propose several enhanced thresholding strategies for determining stable CRPs. For instance, a stringent threshold can be imposed in the enrollment phase for selecting good CRPs, while a relaxed threshold can be used during the normal authentication phase. Experimental data show a high degree of uniqueness and randomness in the PUF responses, which can be attributed to the carefully optimized circuit layout. Finally, the output characteristic of a feed-forward MUX PUF was compared to that of a standard linear MUX PUF from the same 32nm chip.
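The enrollment/authentication idea in the abstract can be sketched as follows: estimate each CRP's 1-probability from repeated evaluations, keep it only if it clears a stringent enrollment threshold, and verify later against a relaxed one. The threshold values and function names below are illustrative assumptions:

```python
# Soft-response thresholding for CRP selection (illustrative thresholds).
import numpy as np

rng = np.random.default_rng(7)

def soft_response(p_one, samples=64):
    """Repeatedly evaluate one CRP; return the observed 1-probability."""
    return (rng.random(samples) < p_one).mean()

ENROLL, AUTH = 0.95, 0.80          # stringent vs. relaxed thresholds (assumed)

def stable_at_enroll(p_hat):
    """Keep only CRPs that are confidently 0 or confidently 1."""
    return p_hat >= ENROLL or p_hat <= 1 - ENROLL
```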

20 citations


Proceedings ArticleDOI
01 Aug 2016
TL;DR: A novel approach to vessel classification that computes the artery/vein ratio (AVR) for all blood vessel segments in the fundus image is presented; the approach is robust due to its feature selection procedure, and similar accuracies are possible across many datasets.
Abstract: Automated classification of retinal vessels in fundus images is the first step towards measurement of retinal characteristics that can be used to screen and diagnose vessel abnormalities for cardiovascular and retinal disorders. This paper presents a novel approach to vessel classification to compute the artery/vein ratio (AVR) for all blood vessel segments in the fundus image. The features extracted are then subjected to a selection procedure using Random Forests (RF) where the features that contribute most to classification accuracy are chosen as input to a polynomial kernel Support Vector Machine (SVM) classifier. The most dominant feature was found to be the vessel information obtained from the Light plane of the LAB color space. The SVM is then subjected to one-time training using 10-fold cross validation on images randomly selected from the VICAVR dataset before testing on an independent test dataset derived from the same database. An Area Under the ROC Curve (AUC) of 97.2% was obtained, averaged over 100 runs of the algorithm. The proposed algorithm is robust due to the feature selection procedure, and it is possible to get similar accuracies across many datasets.

20 citations


Proceedings ArticleDOI
01 Oct 2016
TL;DR: The accuracies of two architectures for radial basis function (RBF) kernel computation for support vector machine (SVM) classifier using stochastic logic are compared using support vectors from classification of electroencephalogram (EEG) signals for seizure prediction.
Abstract: This paper presents novel architectures for radial basis function (RBF) kernel computation for a support vector machine (SVM) classifier using stochastic logic. Stochastic computing systems involve low hardware complexity and are inherently fault-tolerant. Two types of architectures are presented. These include: an implementation with input and output both in bipolar format and an implementation with bipolar input and unipolar output. The computation of the RBF kernel comprises the squared Euclidean distance and the exponential function. In the first implementation, both components are implemented in bipolar format and the exponential function is designed based on the finite state machine (FSM) method. The second implementation computes the squared Euclidean distance with bipolar input and unipolar output. The exponential function is implemented in unipolar format based on the Maclaurin expansion. The accuracies of the two architectures are compared using support vectors from classification of electroencephalogram (EEG) signals for seizure prediction. From simulation results, it is shown that the computational error of the second stochastic implementation with format conversion is 24.90% less than that of the first implementation in bipolar format.
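The second architecture approximates the exponential in unipolar format with a Maclaurin expansion. The numeric idea underneath is a truncated series for exp(−x) on a bounded input range; the truncation order and input range below are chosen arbitrarily, not taken from the paper:

```python
# Truncated Maclaurin series for exp(-x), the approximation underlying the
# unipolar exponential unit described in the abstract.
import math

def exp_neg_maclaurin(x, order=5):
    """exp(-x) ~= sum_{k=0}^{order} (-x)^k / k!"""
    return sum((-x) ** k / math.factorial(k) for k in range(order + 1))

# Worst-case error over x in [0, 1] for a 5th-order truncation.
err = max(abs(exp_neg_maclaurin(x / 100) - math.exp(-x / 100))
          for x in range(101))
```

On [0, 1] the alternating series keeps the order-5 error below 1/6! ≈ 1.4e-3, which is why a short expansion suffices in hardware.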

20 citations


Proceedings ArticleDOI
14 Mar 2016
TL;DR: A novel approach to estimate the delay differences of each stage in a standard MUX-based physical unclonable function (PUF) is presented, and an analysis confirms that the delay differences of all stages of the PUFs on the same chip belong to the same Gaussian probability density function.
Abstract: This paper presents a novel approach to estimate delay differences of each stage in a standard MUX-based physical unclonable function (PUF). Test data collected from PUFs fabricated using 32 nm process are used to train a linear model. The delay differences of the stages directly correspond to the model parameters. These parameters are trained by using a least mean square (LMS) adaptive algorithm. The accuracy of the response using the proposed model is around 97.5% and 99.5% for two different PUFs. Second, the PUF is also modeled by a perceptron. The perceptron has almost 100% classification accuracy. A comparison shows that the perceptron model parameters are scaled versions of the model derived by the LMS algorithm. Thus, the delay differences can be estimated from the perceptron model where the scaling factor is computed by comparing the models of the LMS algorithm and the perceptron. Because the delay differences are challenge independent, these parameters can be stored on the server. This will enable the server to issue random challenges whose responses need not be stored. An analysis of the proposed model confirms that the delay differences of all stages of the PUFs on the same chip belong to the same Gaussian probability density function.
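A toy version of the modeling experiment above: a MUX (arbiter) PUF obeys a linear additive delay model, so a simple learner trained on challenge/response pairs recovers a scaled version of the stage delay differences. The data here are synthetic (not the paper's 32 nm measurements), and a perceptron stands in for the LMS/perceptron pair the abstract compares:

```python
# Perceptron modeling of a synthetic arbiter (MUX) PUF via the standard
# linear delay model. All sizes and data are illustrative.
import numpy as np

rng = np.random.default_rng(3)
STAGES, CRPS = 16, 2000
w_true = rng.standard_normal(STAGES + 1)        # delay differences (+ arbiter bias)

def phi(challenges):
    """Standard parity transform of challenge bits into the linear feature space."""
    signed = 1 - 2 * challenges                 # bits -> +/-1
    feats = np.cumprod(signed[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([feats, np.ones((len(challenges), 1))])

C = rng.integers(0, 2, size=(CRPS, STAGES))
X = phi(C)
y = np.sign(X @ w_true)                         # ideal (noise-free) responses

w = np.zeros(STAGES + 1)                        # perceptron training
for _ in range(50):
    mistakes = 0
    for xi, yi in zip(X, y):
        if np.sign(xi @ w) != yi:
            w += yi * xi
            mistakes += 1
    if mistakes == 0:                           # converged: data are separable
        break

acc = np.mean(np.sign(X @ w) == y)
```

Because the model is linearly separable, the perceptron converges, consistent with the near-100% classification accuracy reported in the abstract.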

Proceedings ArticleDOI
01 Nov 2016
TL;DR: Two approaches are proposed for implementing hyperbolic tangent and sigmoid functions in unipolar stochastic logic, based on either a JK flip-flop or a general unipolar division; bipolar versions are also presented, one of which involves format conversion from bipolar to unipolar format.
Abstract: This paper addresses implementations of hyperbolic tangent and sigmoid functions using stochastic logic. Stochastic computing requires simple logic gates and is inherently fault-tolerant. Thus, these structures are well suited for nanoscale CMOS technologies. Hyperbolic tangent and sigmoid functions are widely used in machine learning systems such as neural networks. This paper makes two major contributions. First, two approaches are proposed for implementing hyperbolic tangent and sigmoid functions in unipolar stochastic logic. The first approach is based on a JK flip-flop. In the second approach, the proposed designs are based on a general unipolar division. Second, we present two approaches to computing hyperbolic tangent and sigmoid functions in bipolar stochastic logic. The first approach involves format conversion from bipolar format to unipolar format. The second approach uses a general bipolar stochastic divider. Simulation and synthesis results are presented for the proposed designs.
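The JK flip-flop mentioned above is a classic stochastic-computing building block: driven by independent input streams with 1-probabilities pJ and pK, its output stream's 1-probability converges to pJ / (pJ + pK), i.e. it acts as a divider. A direct simulation of that known property (not the paper's full tanh/sigmoid circuit):

```python
# JK flip-flop as a stochastic divider: set on J, reset on K, toggle on
# both, hold on neither. Output 1-probability -> pJ / (pJ + pK).
import numpy as np

rng = np.random.default_rng(2)
N, pJ, pK = 1 << 17, 0.3, 0.5
J = rng.random(N) < pJ
K = rng.random(N) < pK

q, ones = 0, 0
for jb, kb in zip(J, K):
    if jb and not kb:
        q = 1            # set
    elif kb and not jb:
        q = 0            # reset
    elif jb and kb:
        q = 1 - q        # toggle
    # J = K = 0: hold
    ones += q

p_out = ones / N         # ~ 0.3 / (0.3 + 0.5) = 0.375
```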

Journal ArticleDOI
TL;DR: In this paper, a general log-likelihood-ratio (LLR)-based SCL decoding algorithm with multi-bit decision was proposed, which can determine 2^K bits simultaneously for arbitrary K with the use of LLR messages.
Abstract: Due to their capacity-achieving property, polar codes have become one of the most attractive channel codes. To date, the successive cancellation list (SCL) decoding algorithm is the primary approach that can guarantee outstanding error-correcting performance of polar codes. However, the hardware designs of the original SCL decoder have large silicon area and long decoding latency. Although some recent efforts can reduce either the area or latency of SCL decoders, these two metrics still cannot be optimized at the same time. This paper, for the first time, proposes a general log-likelihood-ratio (LLR)-based SCL decoding algorithm with multi-bit decision. This new algorithm, referred to as LLR-2Kb-SCL, can determine 2^K bits simultaneously for arbitrary K with the use of LLR messages. In addition, a reduced-data-width scheme is presented to reduce the critical path of the sorting block. Then, based on the proposed algorithm, a VLSI architecture of the new SCL decoder is developed. Synthesis results show that for an example (1024, 512) polar code with list size 4, the proposed LLR-2Kb-SCL decoders achieve significant reduction in both area and latency as compared to prior works. As a result, the hardware efficiency of the proposed designs with K=2 and 3 is 2.33 times and 3.32 times that of the state-of-the-art works, respectively.

Proceedings ArticleDOI
01 Nov 2016
TL;DR: This paper presents a general approach to synthesize correlated stochastic bit streams for specified probabilities and specified correlation coefficients and shows that these match with the theoretical pdfs of the outputs.
Abstract: Stochastic computing using simple logic circuits requires significantly less area and consumes less power compared to traditional computing systems. These circuits are also inherently fault-tolerant. The main drawbacks of these systems include long latency and inexactness in computing. The deviation from exact values increases as the correlation among inputs increases. In many applications, outputs from different sensors may be correlated. Thus, testing correctness of stochastic computing circuits requires generation of correlated stochastic bit streams. While uncorrelated bit streams can be generated using linear feedback shift registers (LFSRs), generation of correlated stochastic bit streams has not yet been fully investigated. This paper presents a general approach to synthesize correlated stochastic bit streams for specified probabilities and specified correlation coefficients. Generation of N correlated stochastic bit streams requires N probabilities and 2^N - N - 1 correlation coefficients. Using N LFSRs, N uncorrelated stochastic bit streams are first generated. The N correlated bit streams are then generated one at a time using conditional marginal probabilities. The method is illustrated for generating two and three correlated bit streams. The area and power overheads for two correlated bit streams are 9.09% and 2.12%, respectively, and for three correlated bit streams are 21.03% and 4.80%, respectively. The generated sequences are applied to simple stochastic logic gates and the probability density functions (pdfs) of the outputs are derived. It is shown that these match with the theoretical pdfs of the outputs.
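The construction in the abstract can be sketched for the N = 2 case: draw the first stream from its marginal, then draw the second bit-by-bit from the conditional marginals implied by the target probabilities and correlation coefficient. A plain uniform RNG stands in for the LFSRs here:

```python
# Two correlated Bernoulli bit streams with specified marginals p1, p2 and
# correlation rho, via conditional marginal probabilities.
import numpy as np

rng = np.random.default_rng(5)
N, p1, p2, rho = 1 << 17, 0.5, 0.5, 0.4

cov = rho * np.sqrt(p1 * (1 - p1) * p2 * (1 - p2))
a = p2 + cov / p1              # P(Y=1 | X=1), from E[XY] = p1*p2 + cov
b = (p2 - a * p1) / (1 - p1)   # P(Y=1 | X=0), preserving the marginal p2

X = rng.random(N) < p1
Y = np.where(X, rng.random(N) < a, rng.random(N) < b)

rho_hat = np.corrcoef(X, Y)[0, 1]   # empirical correlation ~ rho
```

Note the chosen (p1, p2, rho) must keep a and b inside [0, 1]; not every triple is feasible for Bernoulli variables.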

Journal ArticleDOI
07 Apr 2016
TL;DR: In this paper, the spectral power of neural oscillations associated with word processing in schizophrenia was investigated using magnetoencephalography (MEG) data acquired from 12 schizophrenia patients and 10 healthy controls during a visual word processing task.
Abstract: This study investigated spectral power of neural oscillations associated with word processing in schizophrenia. Magnetoencephalography (MEG) data were acquired from 12 schizophrenia patients and 10 healthy controls during a visual word processing task. Two spectral power ratio (SPR) feature sets, the band power ratio (BPR) and the window power ratio (WPR), were extracted from MEG data in five frequency bands, four time windows of word processing, and at locations covering the whole head. Cluster-based nonparametric permutation tests were employed to identify SPRs that show significant between-group difference. Machine learning based feature selection and classification techniques were then employed to select optimal combinations of the significant SPR features, and distinguish schizophrenia patients from healthy controls. We identified three BPR clusters and three WPR clusters that show significant oscillation power difference between groups. These include the theta/delta, alpha/delta and beta/delta BPRs during base-to-encode and encode time windows, and the beta band WPR from base to encode and from encode to post-encode windows. Based on two WPR and one BPR features combined, over 95% cross-validation classification accuracy was achieved using three different linear classifiers separately. These features may have potential as quantitative markers that discriminate between schizophrenia patients and healthy controls; however, this needs further validation on larger samples.

Proceedings ArticleDOI
01 Oct 2016
TL;DR: To improve the accuracy of proposed stochastic classifiers, a novel approach based on linear transformation of input data is proposed for EEG signal classification using linear SVM classifiers.
Abstract: This paper presents novel architectures for machine learning based classifiers using stochastic logic. Two types of classifier architectures are presented. These include: linear support vector machine (SVM) and artificial neural network (ANN). Stochastic computing systems require fewer logic gates and are inherently fault-tolerant. Thus, these structures are well suited for nanoscale CMOS technologies. These architectures are validated using seizure prediction from electroencephalogram (EEG) as an application example. To improve the accuracy of the proposed stochastic classifiers, a novel approach based on linear transformation of input data is proposed for EEG signal classification using linear SVM classifiers. Simulation results in terms of classification accuracy are presented for the proposed stochastic computing and the traditional binary implementations, using datasets from one patient. Compared to the conventional binary implementation, the accuracy of the proposed stochastic ANN is improved by 5.89%. Synthesis results are also presented for EEG signal classification. Compared to the traditional binary linear SVM, the hardware complexity, power consumption and critical path of the stochastic implementation are reduced by 78%, 74% and 53%, respectively. The hardware complexity, power consumption and critical path of the stochastic ANN classifier are reduced by 92%, 88% and 47%, respectively, compared to the conventional binary implementation.
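As background to how a linear SVM decision value can be assembled in bipolar stochastic logic: XNOR gates multiply, and a two-input multiplexer adds with an implicit scale of 1/2, so the result encodes (w1·x1 + w2·x2)/2. This is the generic technique, not the paper's exact classifier datapath:

```python
# Stochastic inner product: XNOR multipliers feeding a MUX scaled adder.
import numpy as np

rng = np.random.default_rng(4)
N = 1 << 18

def enc(v):                     # bipolar encode: P(bit=1) = (v+1)/2
    return rng.random(N) < (v + 1) / 2

def dec(s):                     # bipolar decode: v = 2p - 1
    return 2 * s.mean() - 1

w1, x1, w2, x2 = 0.5, 0.8, -0.4, 0.6
p1 = ~(enc(w1) ^ enc(x1))       # XNOR multiply: w1 * x1
p2 = ~(enc(w2) ^ enc(x2))       # XNOR multiply: w2 * x2
sel = rng.random(N) < 0.5       # fair select stream drives the MUX
y = dec(np.where(sel, p1, p2))  # encodes (w1*x1 + w2*x2) / 2
```

The 1/2 scale factor is intrinsic to MUX addition, which is why scaling strategies matter in stochastic classifier designs like the ones above.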

Journal ArticleDOI
TL;DR: A model for the beat frequency detector-based high-speed TRNG (BFD-TRNG) is proposed; the key contribution of the proposed approach lies in fitting the model to measured data and the ability to use the model to predict the performance of BFD-TRNGs that have not been fabricated.
Abstract: True random number generators (TRNGs) are crucial components for the security of cryptographic systems. In contrast to pseudo-random number generators (PRNGs), TRNGs provide higher security by extracting randomness from physical phenomena. To evaluate a TRNG, statistical properties of the circuit model and raw bitstream should be studied. In this article, a model for the beat frequency detector-based high-speed TRNG (BFD-TRNG) is proposed. The parameters of the model are extracted from the experimental data of a test chip. A statistical analysis of the proposed model is carried out to derive the mean and variance of the counter values of the TRNG. Our statistical analysis results show that the mean of the counter values is inversely proportional to the frequency difference of the two ring oscillators (ROSCs), whereas the dynamic range of the counter values increases linearly with the standard deviation of environmental noise and decreases with an increase in the frequency difference. Without the measurements from the test data, a model cannot be created; similarly, without a model, performance of a TRNG cannot be predicted. The key contribution of the proposed approach lies in fitting the model to measured data and the ability to use the model to predict performance of BFD-TRNGs that have not been fabricated. Several novel alternate BFD-TRNG architectures are also proposed; these include parallel BFD, cascade BFD, and parallel-cascade BFD. These TRNGs are analyzed using the proposed model, and it is shown that the parallel BFD structure requires less area per bit, whereas the cascade BFD structure has a larger dynamic range while maintaining the same mean of the counter values as the original BFD-TRNG. It is shown that 3.25M and 4M random bits can be obtained per counter value from parallel BFD and parallel-cascade BFD, respectively, where M counter values are computed in parallel.
Furthermore, the statistical analysis results illustrate that BFD-TRNGs have better randomness and lower cost per bit than other existing ROSC-TRNG designs. For example, it is shown that BFD-TRNGs accumulate 150p more jitter than the original two-oscillator TRNG and that parallel BFD-TRNGs require one-third the power and one-half the area for the same number of random bits in a specified period.
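A toy Monte Carlo version of the trend the analysis above predicts: count fast-oscillator cycles until the accumulated period difference (plus Gaussian jitter) spans one slow-oscillator period. All parameters are illustrative, not fitted to any chip, and this is a caricature of the paper's model rather than the model itself:

```python
# Beat-frequency counter sketch: mean counter value shrinks as the period
# (frequency) difference between the two ring oscillators grows.
import numpy as np

rng = np.random.default_rng(6)

def counter_value(T_fast, T_slow, jitter_sd):
    """Cycles of the fast ROSC until one full beat is accumulated."""
    phase, count = 0.0, 0
    while phase < T_slow:
        phase += (T_slow - T_fast) + rng.normal(0.0, jitter_sd)
        count += 1
    return count

def mean_counter(dT, runs=200):
    return np.mean([counter_value(1.0, 1.0 + dT, 0.01) for _ in range(runs)])

m_small, m_large = mean_counter(0.005), mean_counter(0.02)  # m_small >> m_large
```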

BookDOI
TL;DR: A novel automated system that segments six sub-retinal layers from optical coherence tomography image stacks of healthy patients and patients with diabetic macular edema is presented, which is robust to disruptions in the retinal micro-structure due to DME.
Abstract: This paper presents a novel automated system that segments six sub-retinal layers from optical coherence tomography (OCT) image stacks of healthy patients and patients with diabetic macular edema (DME). First, each image in the OCT stack is denoised using a Wiener deconvolution algorithm that estimates the additive speckle noise variance using a novel Fourier-domain based structural error. This denoising method enhances the image SNR by an average of 12 dB. Next, the denoised images are subjected to an iterative multi-resolution high-pass filtering algorithm that detects seven sub-retinal surfaces in six iterative steps. The thicknesses of each sub-retinal layer for all scans from a particular OCT stack are then compared to the manually marked ground truth. The proposed system uses adaptive thresholds for denoising and segmenting each image, and hence it is robust to disruptions in the retinal micro-structure due to DME. The proposed denoising and segmentation system has an average error of 12-58 μm and 35-26 μm for segmenting sub-retinal surfaces in normal and abnormal images with DME, respectively. For estimating the sub-retinal layer thicknesses, the proposed system has an average error of 02-25 μm and 18-18 μm in normal and abnormal images, respectively. Additionally, the average inner sub-retinal layer thickness in abnormal images is estimated as 275 μm (r = 0.92) with an average error of 93 μm, while the average thickness of the outer layers in abnormal images is estimated as 574 μm (r = 0.74) with an average error of 35 μm. The proposed system can be useful for tracking the disease progression for DME over a period of time.

Proceedings ArticleDOI
01 Nov 2016
TL;DR: This paper presents a novel patient-specific algorithm for prediction of seizures in epileptic patients that extracts spectral power features, including relative spectral powers and spectral power ratios, and cross correlation coefficients between all pairs of electrodes as two independent feature sets.
Abstract: This paper presents a novel patient-specific algorithm for prediction of seizures in epileptic patients. Spectral power features, including relative spectral powers and spectral power ratios, and cross correlation coefficients between all pairs of electrodes, are extracted as two independent feature sets. Both feature sets are selected independently in a patient-specific manner by classification and regression tree (CART). Selected features are further processed by a second-order Kalman filter and then input independently to three different classifiers which include AdaBoost, radial basis function kernel support vector machine (RBF-SVM) and artificial neural network (ANN). The algorithm is tested on the intra-cranial EEG (iEEG) from the recent American Epilepsy Society Seizure Prediction Challenge database. Intracranial EEG was recorded from five dogs and two patients. These datasets have varying numbers of electrodes and are sampled at different sampling frequencies. It is shown that the spectral feature set achieves a mean AUC of 0.7538, 0.7739, and 0.7948 for AdaBoost, SVM, and ANN, respectively. The correlation coefficients feature set achieves a mean AUC of 0.6640, 0.7403, and 0.7875 for AdaBoost, SVM, and ANN, respectively. The combined best results which use patient-specific feature sets achieve a mean AUC of 0.7603, 0.8472, and 0.8884 for AdaBoost, SVM, and ANN, respectively.

Proceedings ArticleDOI
18 May 2016
TL;DR: An approach based on polynomial factorization is proposed to compute functions in unipolar stochastic logic, where functions are expressed using polynomials, which are derived from Taylor expansion or Lagrange interpolation.
Abstract: This paper addresses computing complex functions using unipolar stochastic logic. Stochastic computing requires only simple logic gates and is inherently fault-tolerant; these structures are therefore well suited to nanoscale CMOS technologies. Implementations of complex functions require extremely low hardware complexity compared to traditional two's complement implementations. In this paper, an approach based on polynomial factorization is proposed to compute functions in unipolar stochastic logic. In this approach, functions are expressed as polynomials, which are derived from Taylor expansion or Lagrange interpolation. The polynomials are implemented in stochastic logic using factorization. Experimental results in terms of accuracy and hardware complexity are presented to compare the proposed designs of complex functions with previous implementations using Bernstein polynomials.
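In unipolar stochastic logic, a single AND gate multiplies two encoded probabilities, which is why a factored polynomial maps to a cascade of very simple gates. A Monte-Carlo sketch of that multiplication (the stream length and the 0.4 coefficient are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def to_stream(p, n):
    """Unipolar encoding: the value p in [0, 1] becomes an n-bit random
    stream whose probability of a 1 is p."""
    return rng.random(n) < p

n = 100_000
x = to_stream(0.5, n)   # input value 0.5
a = to_stream(0.4, n)   # one factored-term coefficient, chosen for illustration
prod = x & a            # AND gate: decoded value approaches 0.5 * 0.4 = 0.2
```

Decoding `prod` by its mean recovers the product to within the stochastic-stream precision, with no multiplier hardware involved.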

Proceedings ArticleDOI
20 Jan 2016
TL;DR: The idea of using desired and undesired modes to design obfuscated DSP functions is illustrated using the fast Fourier transform (FFT) as an example and the security of this approach is discussed.
Abstract: Hardware security has emerged as an important topic in the wake of increasing threats on integrated circuits which include reverse engineering, intellectual property (IP) piracy and overbuilding. This paper explores obfuscation of circuits as a hardware security measure and specifically targets digital signal processing (DSP) circuits which are part of most modern systems. The idea of using desired and undesired modes to design obfuscated DSP functions is illustrated using the fast Fourier transform (FFT) as an example. The selection of a mode is dependent on a key input to the circuit. The system is said to work in its desired mode of operation only if the correct key is applied. Other undesired modes are built into the design to confuse an adversary. The approach to obfuscating the design involves control-flow modifications which alter the computations from the desired mode. We present simulation and synthesis results on a reconfigurable, 2-parallel FFT and discuss the security of this approach. It is shown that the proposed approach results in a reconfigurable and flexible design at an area overhead of 8% and a power overhead of 10%.
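The key-dependent mode selection can be sketched in software; `SECRET_KEY` and the particular undesired mode below are invented for illustration (the paper realizes its modes through control-flow modifications inside a reconfigurable 2-parallel FFT datapath).

```python
import numpy as np

SECRET_KEY = 0xC3   # hypothetical key; in hardware it gates the control logic

def obfuscated_fft(x, key):
    """Only the correct key selects the desired FFT mode; any other key
    selects an undesired mode producing plausible but wrong outputs."""
    x = np.asarray(x, dtype=complex)
    if key == SECRET_KEY:
        return np.fft.fft(x)                  # desired mode
    # Undesired mode (illustrative): a control-flow variant that yields a
    # frequency-reversed spectrum rather than the true transform.
    return np.conj(np.fft.fft(np.conj(x)))
```

An adversary without the key still observes FFT-like outputs, which is the point: the undesired modes are meant to confuse rather than to fail visibly.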


Journal ArticleDOI
TL;DR: In this article, a rotated head array (RHA) was investigated to detect three tracks with 1-D and joint pattern-dependent noise-predictive (PDNP) Bahl-Cocke-Jelinek-Raviv (BCJR) detectors.
Abstract: Two-dimensional magnetic recording is a promising candidate for extending the areal density beyond 1 Tb/in$^2$ while using a conventional writer and media. During the writing process, a shingled writer is usually used to write narrow tracks by overlapping previous tracks, which brings severe intertrack interference (ITI), fewer grains per channel bit, and a correspondingly lower signal-to-noise ratio (SNR). Consequently, in current shingled magnetic recording systems, a normally oriented head array (NHA) is usually implemented to detect a single track, using 2-D signal processing to mitigate the ITI and media noise. A rotated head array (RHA) has since been found to effectively avoid the ITI and regain the lost down-track resolution through signal processing. Correspondingly, in this paper, the RHA is investigated to simultaneously detect three tracks with 1-D and joint pattern-dependent noise-predictive (PDNP) Bahl–Cocke–Jelinek–Raviv (BCJR) detectors. Simulation indicates that, for perfect writing at 6 nm Voronoi grains, if the 1-D PDNP BCJR detector is implemented, the RHA combined with a designed 2-D equalizer producing multiple equalized waveforms can provide 16% density gain compared with the NHA with a 2-D equalizer and 1-D target at the target bit error rate (BER) of $10^{-2}$. If the joint PDNP BCJR detector is implemented, the RHA can provide 25% density gain compared with the NHA using the same detection algorithm at the target BER of $10^{-2}$. With respect to error correction, a longer codeword length of binary low-density parity-check code can be used for decoding of multi-track detection compared with single-track detection, which provides an extra SNR gain.
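A toy illustration of the ITI problem and of why multi-reader (array) readback makes it removable: each reader senses its own track plus a leaked fraction of the neighbor, and a linear multi-track equalizer can undo the mixing. The 2x2 mixing matrix and leakage fraction are hypothetical stand-ins for the real read-head responses, and the zero-forcing inverse is a much simpler stand-in for the paper's designed 2-D equalizer.

```python
import numpy as np

alpha = 0.3                        # hypothetical cross-track leakage fraction
H = np.array([[1.0, alpha],
              [alpha, 1.0]])       # each reader: own track + neighbor leakage
bits = np.array([[+1, -1, +1, +1],
                 [-1, -1, +1, -1]], dtype=float)   # two written tracks
readback = H @ bits                        # ITI-corrupted reader outputs
equalized = np.linalg.inv(H) @ readback    # zero-forcing multi-track equalizer
```

In the noiseless toy case the equalizer recovers the written bits exactly; the real system must additionally handle media noise, which is where the PDNP BCJR detection comes in.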

Proceedings ArticleDOI
01 Oct 2016
TL;DR: The major advantages of the canonic RFFTs are that they require the least butterfly operations, lead to more regular sub-blocks in the data-flow, and only involve real datapath when mapped to architectures.
Abstract: This paper presents a novel algorithm to compute the real-valued fast Fourier transform (RFFT) that is canonic with respect to the number of signal values. A signal value corresponds to a purely real or purely imaginary value, while a complex signal consists of 2 signal values. For an N-point RFFT, no stage need compute more than N signal values, since the input data has N degrees of freedom; any signal value beyond N computed at any stage is inherently redundant. To remove the redundant samples, a sample removal lemma and two types of twiddle factor transformations, pushing and modulation, are proposed. We consider 4 different cases. A canonic RFFT for any composite length can be computed by applying the proposed algorithm recursively. Performances of different RFFTs are also discussed in this paper. The major advantages of the canonic RFFTs are that they require the fewest butterfly operations, lead to more regular sub-blocks in the data-flow, and involve only a real datapath when mapped to architectures.
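The degrees-of-freedom argument is easy to check numerically with a library RFFT: for real input, the DC and Nyquist bins are purely real, so the independent real quantities in the spectrum number exactly N.

```python
import numpy as np

N = 8
x = np.random.default_rng(1).random(N)   # arbitrary real input
X = np.fft.rfft(x)                       # N//2 + 1 complex output bins
# For real input, bins 0 and N/2 are purely real; each of the remaining
# N//2 - 1 bins carries two real signal values (real and imaginary parts).
assert abs(X[0].imag) < 1e-12 and abs(X[-1].imag) < 1e-12
signal_values = 2 * (len(X) - 2) + 2
```

This is exactly the redundancy bound the canonic RFFT enforces at every stage, not just at the output.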

Patent
13 May 2016
TL;DR: In this article, a finite state machine and a physical structure capable of providing a response to a challenge is presented, such that before the physical structure is ever provided with the challenge, the response to the challenge is unpredictable.
Abstract: An apparatus includes a finite state machine and a physical structure capable of providing a response to a challenge, the physical structure such that before the physical structure is ever provided with the challenge, the response to the challenge is unpredictable. The finite state machine moves from an initial state to an intermediate state due to receiving the response from the physical structure, and moves from the intermediate state to a final state due to receiving a key. The final state indicates whether the physical structure is a counterfeit physical structure.
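A software sketch of the claimed state machine: the response moves the FSM from the initial state to the intermediate state, and the key moves it from the intermediate state to the final state. The expected response and key values here are hypothetical placeholders; in the apparatus the response comes from the physical structure itself.

```python
# States: INIT -> INTERMEDIATE (on correct response) -> FINAL (on correct key).
EXPECTED_RESPONSE = 0b1011   # hypothetical enrolled response of the structure
EXPECTED_KEY = 0b0110        # hypothetical key

def authenticate(response, key):
    """Reach FINAL only when both the response and the key are correct."""
    state = "INIT"
    if state == "INIT" and response == EXPECTED_RESPONSE:
        state = "INTERMEDIATE"
    if state == "INTERMEDIATE" and key == EXPECTED_KEY:
        state = "FINAL"
    return state == "FINAL"   # FINAL indicates the structure is not counterfeit
```

Because the response is unpredictable before the structure is first challenged, a counterfeit part cannot pre-compute the value needed to leave the initial state.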

Proceedings ArticleDOI
01 Dec 2016
TL;DR: This paper demonstrates that, with a new encoding, CRNs can compute any set of polynomial functions subject only to the limitation that these functions must map the unit interval to itself.
Abstract: Chemical reaction networks (CRNs) provide a fundamental model in the study of molecular systems. Widely used as a formalism for the analysis of chemical and biochemical systems, CRNs have received renewed attention as a model for molecular computation. This paper demonstrates that, with a new encoding, CRNs can compute any set of polynomial functions subject only to the limitation that these functions must map the unit interval to itself. These polynomials can be expressed as linear combinations of Bernstein basis polynomials with positive coefficients less than or equal to 1. In the proposed encoding approach, each variable is represented using two molecular types: a type-0 and a type-1. The value is the ratio of the concentration of type-1 molecules to the sum of the concentrations of type-0 and type-1 molecules. The proposed encoding naturally exploits the expansion of a power-form polynomial into a Bernstein polynomial. The method is illustrated first for generic CRNs; then the chemical reactions designed for two examples are mapped to DNA strand-displacement reactions.
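The computable class and the fractional encoding are easy to state concretely. The sketch below evaluates a Bernstein-form polynomial and decodes a value from the two-molecular-type ratio; the "concentrations" are made-up numbers, and no reaction kinetics are simulated, only the target function and encoding.

```python
from math import comb

def bernstein_eval(coeffs, x):
    """Evaluate sum_k c_k * C(n,k) * x^k * (1-x)^(n-k). The CRN scheme
    requires 0 <= c_k <= 1, so the function maps [0, 1] into itself."""
    n = len(coeffs) - 1
    return sum(c * comb(n, k) * x**k * (1 - x)**(n - k)
               for k, c in enumerate(coeffs))

# Fractional encoding: value = [type-1] / ([type-0] + [type-1])
x0, x1 = 3.0, 1.0            # hypothetical concentrations encoding x = 0.25
x = x1 / (x0 + x1)
# Bernstein coefficients [0, 0.5, 1] represent f(x) = x for degree n = 2.
y = bernstein_eval([0.0, 0.5, 1.0], x)
```

Any power-form polynomial meeting the unit-interval condition can be re-expanded into such a coefficient list, which is the expansion the encoding exploits.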