Author
B. X. Chang
Bio: B. X. Chang is an academic researcher. The author has contributed to research in topics: Ocean gyre. The author has an h-index of 1 and has co-authored 1 publication, receiving 32 citations.
Topics: Ocean gyre
Papers
01 Jan 2014
TL;DR: Any remaining signal attributable to N2 fixation would imply that the ecological niche of diazotrophs in the central gyre is uncoupled from the major N loss in the OMZ, and that a substantial imbalance of the Pacific N budget has persisted over the 20th century.
Abstract: … requirements throughout surface waters of the N-limited North Pacific. Recent isotopic analysis of skeleton material from deep-sea corals near Hawaii also exhibits a decreasing trend over this time period, which has been interpreted as a signal of increasing N inputs from N2 fixation (36). However, because isotopic and stoichiometric signals of denitrification are transported from the anoxic zone into the subtropical gyre (37), the reported coral trends may originate partly from the OMZ. Any remaining signal attributable to N2 fixation would imply that the ecological niche of diazotrophs in the central gyre is uncoupled from the major N loss in the OMZ (38), and that a substantial imbalance of the Pacific N budget has persisted over the 20th century.
40 citations
Cited by
TL;DR: NeuroSim, a circuit-level macro model that estimates the area, latency, dynamic energy, and leakage power to facilitate the design space exploration of neuro-inspired architectures with mainstream and emerging device technologies is developed.
Abstract: Neuro-inspired architectures based on synaptic memory arrays have been proposed for on-chip acceleration of weighted sum and weight update in machine/deep learning algorithms. In this paper, we developed NeuroSim, a circuit-level macro model that estimates the area, latency, dynamic energy, and leakage power to facilitate the design space exploration of neuro-inspired architectures with mainstream and emerging device technologies. NeuroSim provides a flexible interface and a wide variety of design options at the circuit and device level. Therefore, NeuroSim can be used as a supporting tool for neural network (NN) research, providing circuit-level performance evaluation. With NeuroSim, an integrated framework can be built with hierarchical organization from the device level (synaptic device properties) to the circuit level (array architectures) and then to the algorithm level (NN topology), enabling instruction-accurate evaluation of the learning accuracy as well as the circuit-level performance metrics at the run-time of online learning. Using the multilayer perceptron as a case-study algorithm, we investigated the impact of the "analog" emerging nonvolatile memory (eNVM)'s "nonideal" device properties and benchmarked the tradeoffs between SRAM-, digital eNVM-, and analog eNVM-based architectures for online learning and offline classification.
343 citations
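The kind of circuit-level roll-up NeuroSim performs can be illustrated with a toy estimate for a synaptic crossbar read. This is a minimal sketch, not NeuroSim itself: the per-cell area, read energy, and row latency figures below are invented placeholders, and peripheral circuitry (decoders, sense amplifiers) is ignored.

```python
# Toy NeuroSim-style macro estimate for one weighted-sum (full-array read)
# over a synaptic crossbar. All device parameters are hypothetical defaults,
# not NeuroSim's calibrated values.

def crossbar_estimate(rows, cols, cell_area_um2=0.05,
                      read_energy_pj=0.1, row_latency_ns=1.0):
    """Return (area_um2, energy_pj, latency_ns) for one full-array read."""
    area = rows * cols * cell_area_um2     # synaptic cells only; periphery ignored
    energy = rows * cols * read_energy_pj  # every cell contributes one read
    latency = rows * row_latency_ns        # assume rows are activated sequentially
    return area, energy, latency

area_um2, energy_pj, latency_ns = crossbar_estimate(128, 128)
```

A real tool would add the peripheral-circuit contributions and let the cell parameters come from measured device data; the point here is only the hierarchical device-to-array accounting the abstract describes.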
TL;DR: A comprehensive analysis of the error evolution in the system reveals that the electrical/optical conversions dominate the error contribution, which suggests that an all optical approach is preferable for future neuromorphic computing hardware design.
Abstract: Photonic neuromorphic computing is attracting growing interest as it promises to provide massive parallelism and low power consumption. In this paper, we demonstrate for the first time a feed-forward neural network via an 8 × 8 Indium Phosphide cross-connect chip, where up to 8 on-chip weighted addition circuits are co-integrated, based on semiconductor optical amplifier technology. We perform the weight calibration per neuron, resulting in a normalized root mean square error smaller than 0.08 and a best-case dynamic range of 27 dB. The 4-input to 1-output weighted addition operation is executed on-chip and is part of a neuron whose non-linear function is implemented in software. A three-feedback-loop optimization procedure is demonstrated to enable an output neuron accuracy improvement of up to 55%. The exploitation of this technology as a neural network is evaluated by implementing a trained 3-layer photonic deep neural network to solve the Iris flower classification problem. A prediction accuracy of 85.8% is achieved, compared with the 95% accuracy obtained on a conventional computer. A comprehensive analysis of the error evolution in our system reveals that the electrical/optical conversions dominate the error contribution, which suggests that an all-optical approach is preferable for future neuromorphic computing hardware design.
83 citations
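The neuron described above splits into two stages: a weighted addition performed optically on-chip, and a non-linear activation applied in software. A plain-arithmetic sketch of that split, assuming SOA weights are specified as gains/attenuations in dB and a tanh activation (both assumptions of this sketch, not details reported by the paper):

```python
import math

# Two-stage neuron per the paper's scheme: weighted addition (done optically
# on-chip in the demonstration; modelled here as plain arithmetic) followed
# by a software nonlinearity. Weight encoding in dB is an assumption.

def weighted_add(inputs, weights_db):
    """4-input weighted addition; dB weights converted to linear power ratios."""
    weights = [10 ** (w / 10.0) for w in weights_db]
    return sum(x * w for x, w in zip(inputs, weights))

def neuron(inputs, weights_db):
    """Software nonlinearity applied to the (optical) weighted sum."""
    return math.tanh(weighted_add(inputs, weights_db))

out = neuron([0.2, 0.1, 0.4, 0.3], [0.0, -3.0, -6.0, -10.0])
```

The 27 dB dynamic range quoted in the abstract bounds the ratio between the strongest and weakest realizable weight; in this encoding that is the span of `weights_db`.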
TL;DR: An overview of the main architectures of DNNs and their usefulness in Pharmacology and Bioinformatics is presented in this work, which also points out the importance of considering not only neurons but also glial cells, given the proven importance of astrocytes.
Abstract: Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by advances in Big Data, Deep Learning (DL) and drastically increased chip processing power, especially from general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. An overview of the main architectures of DNNs and their usefulness in Pharmacology and Bioinformatics is presented in this work. The featured applications are: drug design, virtual screening (VS), Quantitative Structure–Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need for neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out that DNNs and neuromorphic chips should consider not only neurons but also glial cells, given the proven importance of astrocytes, a type of glial cell that contributes to information processing in the brain. Deep Artificial Neuron–Astrocyte Networks (DANAN) could overcome the difficulties in architecture design, learning process and scalability of current ML methods.
76 citations
TL;DR: It is identified that bias-polarity-dependent digital switching in HfO2 RRAM is primarily related to the creation and rupture of an oxide barrier, and that the modulation of the CF size in Ta2O5 RRAM allows bias-polarity-independent analog switching with multiple states.
Abstract: We perform a comparative study of HfO2 and Ta2O5 resistive switching memory (RRAM) devices for their possible application as electronic synapses. By means of electrical characterization and simulations, we link their electrical behavior (digital or analog switching) to the properties and evolution of the conductive filament (CF). More specifically, we identify that bias-polarity-dependent digital switching in HfO2 RRAM is primarily related to the creation and rupture of an oxide barrier. Conversely, the modulation of the CF size in Ta2O5 RRAM allows bias-polarity-independent analog switching with multiple states. Therefore, when the Ta2O5 RRAM is used to implement a synapse in multilayer perceptron neural networks operated by back-propagation algorithms, patterns in handwritten digits can be recognized with high accuracy.
75 citations
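The analog switching behavior attributed to Ta2O5 above, where the conductive filament's size is gradually modulated through intermediate states, is the property that makes such a device usable as a multilevel synaptic weight. A minimal behavioral sketch, with the number of levels and the conductance bounds invented for illustration (real devices also show nonlinearity and asymmetry between potentiation and depression, which this model omits):

```python
# Behavioral model of an analog RRAM synapse (Ta2O5-style): the conductance
# steps through a fixed number of intermediate states as the conductive
# filament is grown (potentiation) or shrunk (depression). Level count and
# conductance range are illustrative placeholders, not measured values.

class AnalogSynapse:
    def __init__(self, levels=32, g_min_us=1.0, g_max_us=10.0):
        self.levels = levels
        self.g_min_us, self.g_max_us = g_min_us, g_max_us  # microsiemens
        self.state = levels // 2  # start mid-range

    def potentiate(self):
        """One SET-like pulse: grow the filament by one state, clamped at max."""
        self.state = min(self.levels - 1, self.state + 1)

    def depress(self):
        """One RESET-like pulse: shrink the filament by one state, clamped at min."""
        self.state = max(0, self.state - 1)

    @property
    def conductance_us(self):
        frac = self.state / (self.levels - 1)
        return self.g_min_us + frac * (self.g_max_us - self.g_min_us)
```

In a back-propagation-trained perceptron of the kind the abstract mentions, each weight update would be quantized into a train of such potentiate/depress pulses.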
24 Jul 2016
TL;DR: This work describes an EO training framework for a spiking neural network architecture and a neuromorphic architecture, presents the results of this training framework on four classification data sets, and compares those results to other neural network and neuromorphic implementations.
Abstract: As new neural network and neuromorphic architectures are being developed, new training methods that operate within the constraints of the new architectures are required. Evolutionary optimization (EO) is a convenient training method for new architectures. In this work, we review a spiking neural network architecture and a neuromorphic architecture, and we describe an EO training framework for these architectures. We present the results of this training framework on four classification data sets and compare those results to other neural network and neuromorphic implementations. We also discuss how this EO framework may be extended to other architectures.
70 citations
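Evolutionary optimization appeals as a training method precisely because it only needs a fitness score, not gradients through the (possibly non-differentiable) architecture. A stripped-down sketch of the idea, using a plain perceptron and a simple mutate-and-keep-the-best loop as stand-ins; the network, fitness function, and hyperparameters here are placeholders and not the paper's framework:

```python
import random

# Gradient-free training by evolutionary optimization: repeatedly mutate the
# best-known weight vector and keep any candidate that scores at least as well.
# The perceptron and hyperparameters are illustrative stand-ins for the
# spiking/neuromorphic networks the paper actually trains.

def fitness(weights, data):
    """Classification accuracy of a threshold unit on (inputs, label) pairs."""
    correct = 0
    for x, y in data:
        out = 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0
        correct += (out == y)
    return correct / len(data)

def evolve(data, n_weights, pop=20, gens=50, sigma=0.5, seed=0):
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(n_weights)]
    best_fit = fitness(best, data)
    for _ in range(gens):
        for _ in range(pop):
            cand = [w + rng.gauss(0, sigma) for w in best]  # Gaussian mutation
            f = fitness(cand, data)
            if f >= best_fit:  # greedy selection; fitness never decreases
                best, best_fit = cand, f
    return best, best_fit

# Learn logical OR; the third input is a constant 1 acting as a bias term.
data = [((0, 0, 1), 0), ((0, 1, 1), 1), ((1, 0, 1), 1), ((1, 1, 1), 1)]
weights, acc = evolve(data, n_weights=3)
```

Because only the scalar fitness is consulted, the same loop works unchanged if the threshold unit is swapped for a spiking neuron simulation, which is the property the abstract's EO framework exploits.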