Author

Takashi Morie

Bio: Takashi Morie is an academic researcher from Kyushu Institute of Technology. He has contributed to research in the topics of CMOS and very-large-scale integration. He has an h-index of 20 and has co-authored 206 publications receiving 1,978 citations. Previous affiliations of Takashi Morie include Hiroshima University and Panasonic.


Papers
Journal ArticleDOI
TL;DR: Spike-timing-dependent synaptic plasticity (STDP) is demonstrated in a synapse device based on a ferroelectric-gate field-effect transistor (FeFET).
Abstract: Spike-timing-dependent synaptic plasticity (STDP) is demonstrated in a synapse device based on a ferroelectric-gate field-effect transistor (FeFET). STDP is a key feature of the learning functions observed in human brains, in which the synaptic weight changes depending only on the spike timing of the pre- and post-neurons. The FeFET is composed of stacked oxide materials, ZnO/Pb(Zr,Ti)O3 (PZT)/SrRuO3. In the FeFET, the channel conductance can be altered depending on the density of electrons induced by the polarization of the PZT film, which can be controlled in a non-volatile manner by applying a gate voltage. Applying pulsed gate voltages enables multi-valued modulation of the conductance, which is expected to be caused by a change in the PZT polarization; this modulation depends on the height and duration of the pulse. Utilizing these characteristics, symmetric and asymmetric STDP learning functions are successfully implemented in the FeFET-based synapse device by applying non-linear pulse gate voltages.
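As a concrete illustration of the learning rule being implemented, the sketch below models symmetric and asymmetric STDP windows and applies them to a bounded, multi-valued conductance analogous to the FeFET channel. The window shapes, time constants, and conductance bounds are illustrative assumptions, not values from the paper.

```python
import numpy as np

def asymmetric_stdp(dt_ms, a_plus=0.1, a_minus=0.1, tau=20.0):
    """Weight change vs. spike-timing difference dt = t_post - t_pre (ms):
    pre-before-post potentiates, post-before-pre depresses."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau)
    return -a_minus * np.exp(dt_ms / tau)

def symmetric_stdp(dt_ms, a=0.1, sigma=20.0, baseline=0.03):
    """Weight change depends only on |dt|: near-coincident spikes
    potentiate, widely separated ones mildly depress."""
    return a * np.exp(-(dt_ms / sigma) ** 2) - baseline

# Multi-valued, bounded conductance (analogous to the FeFET channel),
# updated once per pre/post spike pair.
g, g_min, g_max = 0.5, 0.0, 1.0
for dt in (5.0, -12.0, 2.0):
    g = float(np.clip(g + asymmetric_stdp(dt), g_min, g_max))
print(g)
```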

164 citations

Journal ArticleDOI
TL;DR: An analog neural system made by combining LSI's with feedback connections is promising for implementing continuous-time models of recurrent networks with real-time learning.
Abstract: This paper proposes an all-analog neural network LSI architecture and a new learning procedure called contrastive backpropagation learning. In analog neural LSIs with on-chip backpropagation learning, inevitable offset errors arising in the learning circuits seriously degrade the learning performance. Using the learning procedure proposed here, offset errors are canceled to a large extent, and their effect on the learning performance is minimized. This paper also describes a prototype LSI with 9 neurons and 81 synapses based on the proposed architecture, which is capable of continuous neuron-state and continuous-time operation owing to its fully analog and fully parallel nature. An analog neural system made by combining such LSIs with feedback connections is therefore promising for implementing continuous-time models of recurrent networks with real-time learning.
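The abstract does not spell out the cancellation mechanically; a minimal numerical reading of it, assuming the learning circuits add a roughly constant offset to every computed update, is that two update phases with opposite error polarity leave the offset as a common term that subtracts away:

```python
import numpy as np

rng = np.random.default_rng(0)
true_update = rng.normal(size=5)      # ideal backpropagation weight updates
offset = 0.3 * np.ones(5)             # constant offset added by the analog circuits

phase_pos = +true_update + offset     # learning phase with error  +e
phase_neg = -true_update + offset     # contrastive phase with error -e

recovered = 0.5 * (phase_pos - phase_neg)
print(np.allclose(recovered, true_update))   # True: the offset cancels
```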

116 citations

Journal ArticleDOI
01 Jul 1999
TL;DR: This architecture is highly suitable for implementation with deep sub-μm CMOS devices, which can attain improved switching speeds and reduced power dissipation during low-voltage operation, and it offers a low-voltage system-on-a-chip solution for multimedia applications.
Abstract: This brief proposes a new architecture for an oversampling delta-sigma analog-to-digital converter (ADC) utilizing a voltage-controlled oscillator (VCO). The VCO, associated with a pulse counter, works as a high-speed quantizer. This VCO quantizer also performs first-order noise shaping, because the phase of the output pulse is an integrated quantity of the input voltage. If the maximum VCO frequency (f_vm) is designed in the range of (2^bq − 2)f_os …
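A behavioral sketch of the VCO-quantizer idea follows: the oscillator phase integrates the input voltage, whole output edges are counted each clock period, and the fractional phase left over is carried into the next period, which is exactly the first-order noise-shaping mechanism described. The sample rate, center frequency, and VCO gain below are invented for illustration.

```python
import numpy as np

fs, f0, kv = 1e6, 8e6, 4e6          # sample rate (Hz), VCO center freq, VCO gain (Hz/V)
x = 0.25 * np.sin(2 * np.pi * 1e3 * np.arange(2000) / fs)   # input voltage

phase, counts = 0.0, []
for v in x:
    phase += (f0 + kv * v) / fs      # VCO cycles accumulated this period
    n = int(phase)                   # whole edges seen by the pulse counter
    phase -= n                       # remainder carried over -> noise shaping
    counts.append(n)

# `counts` is the oversampled digital output; its quantization error is the
# first difference of the carried phase, i.e. shaped away from low frequencies.
```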

116 citations

Journal ArticleDOI
Makoto Nagata, J. Nagai, K. Hijikata, Takashi Morie, Atsushi Iwata
TL;DR: In this paper, a time-series divided parasitic capacitance model is derived as an efficient estimator of the supply current for simulating the substrate noise injection; the model reproduces the measured substrate noise waveforms.
Abstract: Substrate noise injection in large-scale CMOS logic integrated circuits is quantitatively evaluated through 100-μV, 100-ps resolution measurements of both controlled substrate noises generated by a transition-controllable noise source and practical substrate noises under CMOS logic operations. The noise injection is dominated by leakage of supply/return bounce into the substrate, and the noise intensity is determined by the logic transition activity, according to experimental observations. A time-series divided parasitic capacitance model is derived as an efficient estimator of the supply current for simulating the substrate noise injection, and it reproduces the measured substrate noise waveforms. The efficacy of physical noise reduction techniques at the layout and circuit levels is quantified, and their limitations are discussed in conjunction with the noise injection mechanisms. A reduced-supply-bounce CMOS circuit is proposed as a universal noise reduction technique, and more than 90% noise reduction relative to conventional CMOS is demonstrated.
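A toy version of the time-series divided capacitance idea, under the assumption that the switched parasitic capacitance can be binned into short time slots: the supply current in each slot is then just the charge C_k·V_dd delivered in that slot divided by the slot width. The capacitance profile below is made up.

```python
import numpy as np

vdd, dt = 1.8, 0.1e-9                 # supply voltage (V), time-slot width (s)
c_switched = np.array([0.0, 2.0, 5.0, 3.0, 1.0, 0.2]) * 1e-12  # F per slot

i_supply = c_switched * vdd / dt      # per-slot average supply current (A)
print(i_supply)
# Feeding this current profile into the supply/return parasitics is the step
# that would reproduce the measured bounce (and hence substrate) waveforms.
```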

92 citations

Journal ArticleDOI
TL;DR: A transition-controllable noise source is developed in a 0.1-μm P-substrate N-well CMOS technology that can generate substrate noises with controlled transitions in size, interstage delay, and direction for experimental studies of substrate noise properties in a mixed-signal integrated circuit environment.
Abstract: A transition-controllable noise source is developed in a 0.1-μm P-substrate N-well CMOS technology. This noise source can generate substrate noises with controlled transitions in size, interstage delay, and direction for experimental studies of substrate noise properties in a mixed-signal integrated-circuit environment. Substrate noise measurements with 100-ps, 100-μV resolution are performed by indirect sensing, which uses the threshold-voltage shift of a latch comparator, and by direct probing, which uses a PMOS source follower. Measured waveforms indicate that peaks reflecting logic transition frequencies have a time constant more than ten times larger than the switching time. Analyses with equivalent circuits confirm that charge transfer between the entire parasitic capacitance in the digital circuits and the external supply, through the parasitic impedance of the supply/return paths, dominates the process, and the resultant return bounce appears as the substrate noise.
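To see why the observed time constant can greatly exceed the switching time, here is a crude equivalent-circuit sketch, assuming a single lumped on-chip capacitance recharging through a parasitic supply-path inductance and resistance; all element values and the initial droop are invented. The ringing period 2π√(LC) is set by the parasitics, not by the gate edge rate.

```python
import numpy as np

r, l, c = 1.0, 5e-9, 100e-12      # parasitic path (ohm, H) and on-chip cap (F)
dt, steps = 1e-12, 20000
v_droop, i_loop = 0.1, 0.0        # cap voltage deficit (V) after a switching event

bounce = []
for _ in range(steps):
    di = (v_droop - r * i_loop) / l    # KVL around the supply loop
    dv = -i_loop / c                   # recharge reduces the deficit
    i_loop += di * dt
    v_droop += dv * dt
    bounce.append(r * i_loop)          # return bounce coupled into the substrate

# ringing period ~ 2*pi*sqrt(l*c) ~ 4.4 ns, far longer than a ~50 ps logic edge
```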

77 citations


Cited by
Journal ArticleDOI
TL;DR: A monograph on the geometry of biological time: circular logic, phase singularities, ring populations, and attractor-cycle oscillators, applied to rhythmic systems ranging from circadian clocks and slime mold aggregation to electrical waves in the heart wall.
Abstract: 1980 Preface * 1999 Preface * 1999 Acknowledgements * Introduction * 1 Circular Logic * 2 Phase Singularities (Screwy Results of Circular Logic) * 3 The Rules of the Ring * 4 Ring Populations * 5 Getting Off the Ring * 6 Attracting Cycles and Isochrons * 7 Measuring the Trajectories of a Circadian Clock * 8 Populations of Attractor Cycle Oscillators * 9 Excitable Kinetics and Excitable Media * 10 The Varieties of Phaseless Experience: In Which the Geometrical Orderliness of Rhythmic Organization Breaks Down in Diverse Ways * 11 The Firefly Machine * 12 Energy Metabolism in Cells * 13 The Malonic Acid Reagent ('Sodium Geometrate') * 14 Electrical Rhythmicity and Excitability in Cell Membranes * 15 The Aggregation of Slime Mold Amoebae * 16 Numerical Organizing Centers * 17 Electrical Singular Filaments in the Heart Wall * 18 Pattern Formation in the Fungi * 19 Circadian Rhythms in General * 20 The Circadian Clocks of Insect Eclosion * 21 The Flower of Kalanchoe * 22 The Cell Mitotic Cycle * 23 The Female Cycle * References * Index of Names * Index of Subjects

3,424 citations

Proceedings ArticleDOI
03 Aug 2010
TL;DR: New unsupervised learning algorithms and new non-linear stages that allow ConvNets to be trained with very few labeled samples are described, with applications to visual object recognition and vision-based navigation for off-road mobile robots.
Abstract: Intelligent tasks, such as visual perception, auditory perception, and language understanding, require the construction of good internal representations of the world (or "features"), which must be invariant to irrelevant variations of the input while preserving relevant information. A major question for machine learning is how to learn such good features automatically. Convolutional networks (ConvNets) are a biologically inspired trainable architecture that can learn invariant features. Each stage in a ConvNet is composed of a filter bank, some non-linearities, and feature pooling layers. With multiple stages, a ConvNet can learn multi-level hierarchies of features. While ConvNets have been successfully deployed in many commercial applications from OCR to video surveillance, they require large amounts of labeled training samples. We describe new unsupervised learning algorithms and new non-linear stages that allow ConvNets to be trained with very few labeled samples. Applications to visual object recognition and vision navigation for off-road mobile robots are described.
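A single ConvNet stage as described (filter bank, then a non-linearity, then feature pooling) can be written down directly; the NumPy sketch below uses random filters, tanh, and 2×2 max pooling purely for illustration.

```python
import numpy as np

def conv2d_valid(img, kernels):
    """img: (H, W); kernels: (K, kh, kw) -> feature maps (K, H-kh+1, W-kw+1)."""
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i+kh, j:j+kw] * kernels[k])
    return out

def stage(img, kernels, pool=2):
    fmap = np.tanh(conv2d_valid(img, kernels))       # filter bank + non-linearity
    K, H, W = fmap.shape
    H2, W2 = H - H % pool, W - W % pool              # crop to the pooling grid
    pooled = fmap[:, :H2, :W2].reshape(K, H2 // pool, pool, W2 // pool, pool)
    return pooled.max(axis=(2, 4))                   # spatial max pooling

rng = np.random.default_rng(0)
features = stage(rng.normal(size=(28, 28)), rng.normal(size=(8, 5, 5)))
print(features.shape)   # (8, 12, 12): 8 pooled feature maps
```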

1,927 citations

Journal ArticleDOI
TL;DR: A new nanoscale electronic synapse based on technologically mature phase change materials employed in optical data storage and nonvolatile memory applications is reported, utilizing continuous resistance transitions in phase change material to mimic the analog nature of biological synapses, enabling the implementation of a synaptic learning rule.
Abstract: Brain-inspired computing is an emerging field, which aims to extend the capabilities of information technology beyond digital logic. A compact nanoscale device, emulating biological synapses, is needed as the building block for brain-like computational systems. Here, we report a new nanoscale electronic synapse based on technologically mature phase change materials employed in optical data storage and nonvolatile memory applications. We utilize continuous resistance transitions in phase change materials to mimic the analog nature of biological synapses, enabling the implementation of a synaptic learning rule. We demonstrate different forms of spike-timing-dependent plasticity using the same nanoscale synapse with picojoule level energy consumption.
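A hedged sketch of using a phase-change cell's continuous resistance transitions as an analog weight: partial-crystallization pulses raise the conductance in small steps, while a melt-quench reset re-amorphizes the cell abruptly. The limits, step count, and abrupt-reset modeling are illustrative assumptions, not device data from the paper.

```python
G_MIN, G_MAX, LEVELS = 1e-6, 1e-4, 30   # siemens; illustrative device limits

def potentiate(g, n_pulses=1):
    """Partial-crystallization pulses raise conductance in small steps."""
    step = (G_MAX - G_MIN) / LEVELS
    return min(G_MAX, g + n_pulses * step)

def depress(g):
    """A melt-quench (reset) pulse re-amorphizes the cell; modeled here as an
    abrupt return to the low-conductance state (reset is not gradual)."""
    return G_MIN

g = G_MIN
for _ in range(10):
    g = potentiate(g)       # gradual analog potentiation
print(g)                    # ~ G_MIN + 10/30 of the full conductance range
g = depress(g)              # abrupt depression
```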

1,098 citations

Journal ArticleDOI
02 Jan 2017
TL;DR: The relevant virtues and limitations of these devices are assessed, in terms of properties such as conductance dynamic range, (non)linearity and (a)symmetry of conductance response, retention, endurance, required switching power, and device variability.
Abstract: Dense crossbar arrays of non-volatile memory (NVM) devices represent one possible path for implementing massively-parallel and highly energy-efficient neuromorphic computing systems. We first revie...
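As a toy model of the conductance (non)linearity and (a)symmetry the review assesses, the update below shrinks as the device approaches its bound and uses a different shape for potentiation and depression; all constants are invented.

```python
import numpy as np

def update(g, direction, a_p=0.06, a_d=0.09, nl=3.0):
    """g is normalized conductance in [0, 1]; direction +1 potentiates,
    -1 depresses. The step shrinks near the relevant bound (nonlinearity)
    and the two branches differ in shape (asymmetry)."""
    if direction > 0:
        g = g + a_p * np.exp(-nl * g)
    else:
        g = g - a_d * np.exp(-nl * (1.0 - g))
    return float(np.clip(g, 0.0, 1.0))

g, trace = 0.0, []
for _ in range(50):
    g = update(g, +1)
    trace.append(g)
for _ in range(50):
    g = update(g, -1)
    trace.append(g)
# Plotting `trace` shows the bowed, asymmetric response that limits in-situ
# training accuracy when such devices hold the weights.
```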

800 citations

Journal ArticleDOI
06 Jun 2018-Nature
TL;DR: Mixed hardware–software neural-network implementations that involve up to 204,900 synapses and that combine long-term storage in phase-change memory, near-linear updates of volatile capacitors and weight-data transfer with ‘polarity inversion’ to cancel out inherent device-to-device variations are demonstrated.
Abstract: Neural-network training can be slow and energy intensive, owing to the need to transfer the weight data for the network between conventional digital memory chips and processor chips. Analogue non-volatile memory can accelerate the neural-network training algorithm known as backpropagation by performing parallelized multiply-accumulate operations in the analogue domain at the location of the weight data. However, the classification accuracies of such in situ training using non-volatile-memory hardware have generally been less than those of software-based training, owing to insufficient dynamic range and excessive weight-update asymmetry. Here we demonstrate mixed hardware-software neural-network implementations that involve up to 204,900 synapses and that combine long-term storage in phase-change memory, near-linear updates of volatile capacitors and weight-data transfer with 'polarity inversion' to cancel out inherent device-to-device variations. We achieve generalization accuracies (on previously unseen data) equivalent to those of software-based training on various commonly used machine-learning test datasets (MNIST, MNIST-backrand, CIFAR-10 and CIFAR-100). The computational energy efficiency of 28,065 billion operations per second per watt and throughput per area of 3.6 trillion operations per second per square millimetre that we calculate for our implementation exceed those of today's graphical processing units by two orders of magnitude. This work provides a path towards hardware accelerators that are both fast and energy efficient, particularly on fully connected neural-network layers.
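A schematic view of the differential weight readout underlying such implementations: each weight is the difference of two conductances, and a multiply-accumulate is one set of column currents (Ohm's and Kirchhoff's laws). The paper's 'polarity inversion' is represented here only by the observation that swapping the G+ and G− roles flips each weight's sign; array sizes and conductance ranges are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
g_plus  = rng.uniform(0.2, 1.0, size=(4, 8))   # normalized conductance pair
g_minus = rng.uniform(0.2, 1.0, size=(4, 8))   # per synapse: w = g_plus - g_minus

def mac(x, gp, gm):
    """Analog multiply-accumulate: column current y_j = sum_i (gp-gm)[i,j]*x_i."""
    return x @ (gp - gm)

x = rng.normal(size=4)
y1 = mac(x, g_plus, g_minus)
y2 = mac(x, g_minus, g_plus)      # roles swapped ('polarity inversion'): y2 = -y1
print(np.allclose(y1, -y2))       # True; periodic swaps average out fixed asymmetries
```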

693 citations