Author

Mohammed Ismail

Other affiliations: Khalifa University, Ohio State University, Delco Electronics
Bio: Mohammed Ismail is an academic researcher from Wayne State University. His research spans topics including CMOS and operational amplifiers. He has an h-index of 43 and has co-authored 557 publications receiving 7,964 citations. Previous affiliations of Mohammed Ismail include Khalifa University and Ohio State University.


Papers
Book
13 Oct 2011
TL;DR: An edited volume on analog VLSI for neural computing, with chapters ranging from a neural processor for maze solving to MOS techniques for neural computation and retina-like sensors.
Abstract: Contents:
1. A Neural Processor for Maze Solving
2. Resistive Fuses: Analog Hardware for Detecting Discontinuities in Early Vision
3. CMOS Integration of Herault-Jutten Cells for Separation of Sources
4. Circuit Models of Sensory Transduction in the Cochlea
5. Issues in Analog VLSI and MOS Techniques for Neural Computing
6. Design and Fabrication of VLSI Components for a General Purpose Analog Neural Computer
7. A Chip that Focuses an Image on Itself
8. A Foveated Retina-Like Sensor Using CCD Technology
9. Cooperative Stereo Matching Using Static and Dynamic Image Features
10. Adaptive Retina

359 citations

Journal ArticleDOI
TL;DR: An in-depth analysis of TEGs is presented, starting with an extensive description of their working principle, types, materials, and figure of merit, followed by improvement techniques including different thermoelectric material arrangements (conventional, segmented, and cascaded) and the technologies and substrate types used (silicon, ceramics, and polymers).
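For reference, the thermoelectric figure of merit mentioned in the summary is conventionally defined as follows (the standard textbook expression, not a formula quoted from the paper):

```latex
% Dimensionless thermoelectric figure of merit
ZT = \frac{S^{2}\,\sigma\,T}{\kappa}
% S      : Seebeck coefficient (V/K)
% \sigma : electrical conductivity (S/m)
% \kappa : thermal conductivity (W/(m.K))
% T      : absolute temperature (K)
```

A higher ZT means a larger ratio of useful power factor (S²σ) to parasitic heat conduction (κ), which is why segmenting or cascading materials with complementary operating-temperature ranges, as the review discusses, can raise effective performance.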

352 citations

Journal ArticleDOI
TL;DR: A generalized parameter-level statistical model, called statistical MOS (SMOS), capable of generating statistically significant model decks from intra- and inter-die parameter statistics is described, and Calculated model decks preserve the inherent correlations between model parameters while accounting for the dependence of parameter variance on device separation distance and device area.
Abstract: A generalized parameter-level statistical model, called statistical MOS (SMOS), capable of generating statistically significant model decks from intra- and inter-die parameter statistics is described. Calculated model decks preserve the inherent correlations between model parameters while accounting for the dependence of parameter variance on device separation distance and device area. Using a Monte Carlo approach to parameter sampling, circuit output means and standard deviations can be simulated. Incorporated in a CAD environment, these modeling algorithms will provide the analog circuit designer with a method to determine the effect of both circuit layout and device sizing on circuit output variance. Test chips have been fabricated from two different fabrication processes to extract statistical information required by the model. Experimental and simulation results for two analog subcircuits are compared to verify the statistical modeling algorithms.
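As an illustration of the Monte Carlo parameter-sampling step described above (a minimal sketch with invented parameter values and a toy drain-current model, not the actual SMOS algorithm or its extracted statistics):

```python
import numpy as np

# Toy example: sample correlated MOS parameters (threshold voltage Vth and
# transconductance factor K') from a joint normal, then propagate each sample
# through a simple square-law drain-current model to estimate the mean and
# standard deviation of the circuit output.
rng = np.random.default_rng(0)

mean = np.array([0.7, 100e-6])           # [Vth (V), K' (A/V^2)] -- invented values
std = np.array([0.02, 4e-6])             # assumed parameter standard deviations
corr = -0.6                              # assumed Vth/K' correlation
cov = np.array([[std[0]**2, corr * std[0] * std[1]],
                [corr * std[0] * std[1], std[1]**2]])

samples = rng.multivariate_normal(mean, cov, size=10_000)

W_over_L, Vgs = 10.0, 1.2                # fixed geometry and bias
vth, kp = samples[:, 0], samples[:, 1]
i_d = 0.5 * kp * W_over_L * (Vgs - vth) ** 2   # saturation-region current

print(f"mean Id = {i_d.mean()*1e6:.1f} uA, sigma = {i_d.std()*1e6:.2f} uA")
```

The property the paper emphasizes, preserving inter-parameter correlation, appears here in the off-diagonal covariance terms; SMOS additionally scales the variances with device area and separation distance.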

191 citations

Journal ArticleDOI
TL;DR: A second-order RF bandpass filter based on an active inductor, implemented in a 0.35-μm CMOS process, achieves a 28-dB spurious-free dynamic range (SFDR) with a total current consumption (including the buffer stage) of 17 mA from a 2.7-V supply.
Abstract: In this paper, a second-order RF bandpass filter based on an active inductor is implemented in a 0.35-μm CMOS process. Issues related to the intrinsic quality factor and dynamic range of the CMOS active inductor are addressed. Tuned to 900 MHz with Q = 40, the filter has a 28-dB spurious-free dynamic range (SFDR), and total current consumption (including the buffer stage) is 17 mA with a 2.7-V power supply. Experimental results also show that such active inductors can be used to build higher-order RF filters and voltage-controlled oscillators (VCOs).
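For a second-order bandpass response, the quoted center frequency and quality factor fix the bandwidth; a minimal sketch of that relationship (generic filter theory, not the paper's circuit):

```python
import numpy as np

# Ideal second-order bandpass magnitude response for f0 = 900 MHz, Q = 40.
f0, Q = 900e6, 40.0
w0 = 2 * np.pi * f0

def h_mag(f):
    """Magnitude of H(s) = (w0/Q)s / (s^2 + (w0/Q)s + w0^2) at s = j*2*pi*f."""
    s = 1j * 2 * np.pi * f
    return np.abs((w0 / Q) * s / (s**2 + (w0 / Q) * s + w0**2))

bw = f0 / Q                                          # -3 dB bandwidth
print(f"-3 dB bandwidth:   {bw/1e6:.1f} MHz")        # 22.5 MHz
print(f"gain at f0:        {h_mag(f0):.3f}")         # 1.000 (0 dB at center)
print(f"gain at band edge: {h_mag(f0 + bw/2):.3f}")  # ~0.707 (about -3 dB)
```

A Q of 40 at 900 MHz thus corresponds to a roughly 22.5-MHz passband, which is why the intrinsic quality factor of the active inductor is the limiting design issue the abstract highlights.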

163 citations

Journal ArticleDOI
TL;DR: This paper presents the design of a fully integrated electrocardiogram (ECG) signal processor (ESP) for the prediction of ventricular arrhythmia using a unique set of ECG features and a naive Bayes classifier.
Abstract: This paper presents the design of a fully integrated electrocardiogram (ECG) signal processor (ESP) for the prediction of ventricular arrhythmia using a unique set of ECG features and a naive Bayes classifier. Real-time and adaptive techniques for the detection and delineation of the P-QRS-T waves were investigated to extract the fiducial points. These techniques are robust to variations in the ECG signal, with high sensitivity and precision. Two databases of heart signal recordings, from MIT PhysioNet and the American Heart Association, were used as a validation set to evaluate the performance of the processor. Based on application-specific integrated circuit (ASIC) simulation results, the overall classification accuracy was found to be 86% on the out-of-sample validation data with a 3-s window size. The architecture of the proposed ESP was implemented in a 65-nm CMOS process. It occupies 0.112 mm² of area and consumes 2.78 μW of power at an operating frequency of 10 kHz and an operating voltage of 1 V. Notably, the proposed ESP is the first ASIC implementation of an ECG-based processor used for the prediction of ventricular arrhythmia up to 3 h before onset.
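A minimal sketch of the classifier family used above (a generic Gaussian naive Bayes on synthetic two-feature data; the paper's actual ECG feature set and trained parameters are not reproduced here):

```python
import numpy as np

# Generic Gaussian naive Bayes: fit per-class feature means/variances, then
# classify by maximum log-posterior. The feature values below are synthetic
# stand-ins for ECG-derived features (e.g., interval and amplitude measures).
rng = np.random.default_rng(1)

X0 = rng.normal([0.8, 0.1], 0.05, size=(200, 2))   # class 0: "normal"
X1 = rng.normal([0.6, 0.3], 0.05, size=(200, 2))   # class 1: "pre-arrhythmic"
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

def fit(X, y):
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return stats

def predict(stats, x):
    def log_post(c):
        mu, var, prior = stats[c]
        ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        return ll + np.log(prior)
    return max(stats, key=log_post)

stats = fit(X, y)
print(predict(stats, np.array([0.78, 0.12])))  # -> 0
print(predict(stats, np.array([0.62, 0.28])))  # -> 1
```

Naive Bayes is attractive for an on-chip classifier because inference reduces to a handful of per-feature multiply-accumulate operations, consistent with the microwatt power budget reported.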

128 citations


Cited by
Book
18 Nov 2016
TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.
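The "hierarchy of concepts" the book describes is, concretely, a composition of simple layers; a minimal sketch of a deep feedforward network's forward pass (illustrative only, with arbitrary layer sizes and random weights):

```python
import numpy as np

# A tiny feedforward network: each layer builds a more abstract representation
# from the previous one, illustrating the "many layers deep" hierarchy.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

layer_sizes = [8, 16, 16, 2]    # input -> two hidden layers -> output scores
weights = [rng.normal(0, 0.5, (m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:    # nonlinearity on hidden layers only
            x = relu(x)
    return x

print(forward(rng.normal(size=8)))  # raw output scores for 2 "classes"
```

Training such a network (the regularization and optimization chapters the book covers) amounts to adjusting `weights` and `biases` by gradient descent on a loss; the forward pass above is the layered composition itself.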

38,208 citations

Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Book
02 Jan 1991

1,377 citations

01 Jan 1992
TL;DR: A multilevel commutation cell for high-voltage power conversion is introduced; it can be applied to either choppers or voltage-source inverters and generalized to any number of switches.
Abstract: The authors discuss high-voltage power conversion. Conventional series-connection and three-level voltage-source inverter techniques are reviewed and compared. A novel, versatile multilevel commutation cell is introduced: it is shown that this topology is safer and simpler to control, and delivers purer output waveforms. The authors show how this technique can be applied to either choppers or voltage-source inverters and generalized to any number of switches.
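A rough sketch of why such a multilevel cell yields purer waveforms: with N cells and a per-cell voltage step of Vdc/N, the 2^N switch states map onto only N+1 distinct output levels, leaving redundant states available for balancing (a generic flying-capacitor-style idealization, not the authors' exact topology analysis):

```python
from itertools import product

# Enumerate the output levels of an N-cell multilevel commutation cell,
# assuming ideal cell capacitors held at k*Vdc/N (a common idealization).
# Each cell contributes Vdc/N when its upper switch is on, so the output
# is (number of upper switches on) * Vdc / N.
N, Vdc = 3, 300.0
levels = {}
for state in product((0, 1), repeat=N):        # all 2^N switch combinations
    v_out = sum(state) * Vdc / N
    levels.setdefault(v_out, []).append(state)

for v in sorted(levels):
    print(f"{v:6.1f} V  <- {len(levels[v])} state(s): {levels[v]}")
# 4 distinct levels (0, 100, 200, 300 V); the intermediate levels have
# redundant switch states, which is what permits capacitor-voltage balancing
# and reduces the voltage stress per switch.
```

More levels mean smaller voltage steps in the synthesized waveform, hence the "purer output waveforms" claimed in the abstract.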

1,202 citations

Journal ArticleDOI
TL;DR: This paper presents an overview of nanopositioning technologies and devices emphasizing the key role of advanced control techniques in improving precision, accuracy, and speed of operation of these systems.
Abstract: Nanotechnology is the science of understanding matter and the control of matter at dimensions of 100 nm or less. Encompassing nanoscale science, engineering, and technology, nanotechnology involves imaging, measuring, modeling, and manipulation of matter at this level of precision. An important aspect of research in nanotechnology involves precision control and manipulation of devices and materials at a nanoscale, i.e., nanopositioning. Nanopositioners are precision mechatronic systems designed to move objects over a small range with a resolution down to a fraction of an atomic diameter. The desired attributes of a nanopositioner are extremely high resolution, accuracy, stability, and fast response. The key to successful nanopositioning is accurate position sensing and feedback control of the motion. This paper presents an overview of nanopositioning technologies and devices emphasizing the key role of advanced control techniques in improving precision, accuracy, and speed of operation of these systems.
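A minimal sketch of the feedback-control idea at the core of nanopositioning (a discrete-time PID loop around a generic second-order positioner model with made-up, normalized parameters; real nanopositioners typically also need damping or notch compensation for their mechanical resonances):

```python
# Plant: mass-spring-damper x'' = (u - c*x' - k*x)/m, a crude stand-in for a
# piezo nanopositioner's dominant mechanical mode (normalized units).
m, c, k = 1.0, 2.0, 100.0
dt, T = 1e-3, 5.0

kp, ki, kd = 200.0, 500.0, 30.0   # PID gains, hand-tuned for this toy model
x = v = integ = 0.0
setpoint = 1.0                    # step command to a new position

for _ in range(int(T / dt)):
    err = setpoint - x
    integ += err * dt
    u = kp * err + ki * integ - kd * v   # derivative on measurement
    a = (u - c * v - k * x) / m          # plant dynamics
    v += a * dt                          # explicit Euler integration
    x += v * dt

print(f"final position: {x:.4f} (target {setpoint})")
```

The integral term removes steady-state error (important for accuracy), while the derivative term damps the lightly damped mechanical mode (important for speed), mirroring the precision/speed trade-off the overview emphasizes.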

1,027 citations