A CMOS feedforward neural-network chip with on-chip parallel learning for oscillation cancellation
TL;DR: A mixed-signal CMOS feedforward neural-network chip with on-chip error-reduction hardware for real-time adaptation, trained by a genetic random-search method, the random weight change (RWC) algorithm, and suitable for direct feedback control.
Abstract: The paper presents a mixed-signal CMOS feedforward neural-network chip with on-chip error-reduction hardware for real-time adaptation. The chip has compact on-chip weights capable of high-speed parallel learning; the implemented learning algorithm is a genetic random search algorithm, the random weight change (RWC) algorithm. The algorithm does not require a known desired neural-network output for error calculation and is suitable for direct feedback control. With hardware experiments, we demonstrate that the RWC chip, as a direct feedback controller, successfully suppresses unstable oscillations modeling combustion engine instability in real time.
Summary (3 min read)
- When the networks become larger, the software simulation time increases accordingly.
- Hardware-friendly algorithms are essential to ensure the functionality and cost effectiveness of the hardware implementation.
- Neural networks can be implemented with software, digital hardware, or analog hardware.
A. Learning Algorithm
- The learning algorithms are associated with the specific neural-network architectures.
- This work focuses on the widely used layered feedforward neural-network architecture.
- The RWC update rule is w_i(n+1) = w_i(n) + Δw_i(n+1); if the error decreased, Δw_i(n+1) = Δw_i(n), and if the error increased, Δw_i(n+1) = δ·rand_i(n), where rand_i(n) is either +1 or −1 with equal probability, δ is a small quantity that sets the learning rate, and w_i(n) and Δw_i(n) are the weight and weight change of the i-th synapse at the n-th iteration.
- Previously, it has been shown with simulations that a modified RWC algorithm can identify and control an induction motor.
- Further simulation-based research has shown that the RWC algorithm is immune to analog circuit nonidealities.
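The update rule above can be sketched in software as follows. This is a minimal illustration of the RWC iteration, not the chip's analog implementation; the quadratic toy error function and all parameter values are assumptions standing in for the measured oscillation magnitude.

```python
import random

random.seed(0)  # reproducible demonstration

def rwc_step(w, dw, err, prev_err, delta=0.01):
    """One random weight change (RWC) iteration: if the error did not
    decrease, draw fresh random changes of magnitude delta (+delta or
    -delta with equal probability); otherwise keep the previous changes."""
    if err >= prev_err:
        dw = [delta * random.choice((-1.0, 1.0)) for _ in w]
    return [wi + di for wi, di in zip(w, dw)], dw

# Toy error function (an assumption): distance of each weight from 0.5.
def error(w):
    return sum((wi - 0.5) ** 2 for wi in w)

w, dw = [0.0] * 4, [0.0] * 4
prev_err = float("inf")
for _ in range(5000):
    err = error(w)
    w, dw = rwc_step(w, dw, err, prev_err)
    prev_err = err
```

Note that the rule never evaluates a desired network output, only whether a scalar error went up or down, which is what makes it usable for direct feedback control.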
B. Synapse Circuits
- Usually, the capacitors have to be made large enough (around 20 pF, for room-temperature decay on the order of seconds) to prevent unwanted weight-value decay.
- In addition, the added A/D and D/A converters either make the chip large or result in slow serial operation.
- EEPROM weights are compact nonvolatile memories (permanent storage), but they are process sensitive and hard to program.
- Thus, the chip described here uses capacitors as weight storage.
C. Activation Function Circuits
- Research shows that the nonlinearity used in neural-network activation functions can be replaced by multiplier nonlinearity.
- Since the weight-multiplication circuit is nonlinear, the authors use a linear current-to-voltage converter with saturation to implement the activation function.
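A linear-with-saturation activation of this kind can be written down in a few lines. The gain and the saturation limits below are illustrative assumptions, not the chip's actual circuit values:

```python
def activation(i_in, gain=1.0, v_min=0.0, v_max=2.5):
    """Linear current-to-voltage conversion, clipped (saturated) at
    assumed rail voltages v_min and v_max."""
    return min(max(gain * i_in, v_min), v_max)
```

In the linear region the output simply tracks the summed synapse current; the rails provide the bounded, sigmoid-like behavior the network needs.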
A. Chip Architecture
- The chip was fabricated through MOSIS in the Orbit 2-μm n-well process.
- It contains 100 weights in a 10×10 array and has ten inputs and ten outputs.
- The input pads are located at the right side of the chip, and the output pads are located at the bottom side of the chip.
- This arrangement makes it possible for the chip to be cascaded into multilayer networks.
- At a given time, each cell sees a random bit, either “1” or “0,” at the output of the shift register.
B. Weight Storage and Adaptation Circuits
- The weight charge is stored on the larger capacitor, whose voltage represents the weight value.
- Suppose the voltage across one capacitor is V1 and the voltage across the other is V2 before they are connected in parallel; after connecting them for charge sharing, the final voltage across both capacitors settles to the same value V.
- Clocks Ph3 and Perm have complementary phases with a period of 2 ms.
- The weight increment and decrement rates are determined by the capacitance values and the Lastcap signal, as mentioned earlier.
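The charge-sharing step above follows from charge conservation, and can be checked numerically. The capacitor values in the example are illustrative assumptions, not the chip's actual component values:

```python
def charge_share(c1, v1, c2, v2):
    """Final voltage after connecting two charged capacitors in parallel:
    total charge is conserved, so V = (C1*V1 + C2*V2) / (C1 + C2)."""
    return (c1 * v1 + c2 * v2) / (c1 + c2)

# Example (assumed values): a large 20 pF weight capacitor at 1.0 V shares
# charge with a small 1 pF increment capacitor at 2.0 V, nudging the
# stored weight voltage slightly upward.
v = charge_share(20e-12, 1.0, 1e-12, 2.0)
```

Because the weight capacitor is much larger than the increment capacitor, each sharing event moves the weight voltage by only a small step, which is what sets the effective learning step size.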
C. Multiplier Circuits
- The substrate voltage is the most negative of all the biasing voltages.
- Fig. 9 shows a 100-ms time slice of the continuously adjusting process; the desired high and low output voltages are 1.5 and 0.5 V for this case.
- According to the error calculation equation, the error decreases.
- As this process goes on, the network dynamically maintains its performance as an inverter by continuously adjusting its weights.
- Active controllers developed to suppress oscillation in fixed modes cannot deal with unpredicted new oscillation modes.
A. Combustion Model With Continuously Changing Parameters
- The combustion process is modeled by a limit-cycle oscillator model whose parameters are the input u to the engine, the output y, the oscillation frequency ω, the damping factor ζ, and the limit-cycle constant κ.
- The combustion engine output is tap delayed; the original engine output and the tap-delayed signals are the inputs of the neural-network controller.
- Fig. 11 shows the simulation result when the parameter change rate is 1 point/s.
- In the figure, the horizontal axis is time.
B. Noise Tolerance
- This section presents simulation results for neural-network control of the simulated single-frequency limit-cycle combustion instability with 10% random noise.
- The purpose of these simulations is to determine the noise tolerance of the system.
- The 10% noise level is a fair estimate for a real combustion system.
- The plant parameters were as follows: the frequency was 400 Hz, the damping factor was 0.005, and the limit cycle constant was one.
- The bottom plot shows the engine output under neural-network control; only the additive noise remains in the engine output after about 5 s.
A. Test Setup
- The hardware test setup is shown in Fig. 13.
- The analog-to-digital converter (ADC) and the digital-to-analog converter (DAC) cards provide interface between the oscillating process and the hardware chip.
- The neural-network chip itself requires several interface signals: power supplies, two nonoverlapping clocks to synchronize the learning process, a random bit, and a one-bit error increase/decrease signal.
- The weights of the chip are updated every time an error is calculated.
- Within these two cycles, the combustion simulation process keeps on generating outputs, and these outputs are forward propagated through the chip and fed back to the engine input.
B. Stable Oscillations
- The combustion process is simulated using the limit cycle model.
- The frequency is 400 Hz and the damping factor is zero.
- The error signal is specified to be proportional to the magnitude of the oscillation and is calculated by low-pass filtering the rectified engine-output oscillation signal.
- Fig. 15 shows the details of the learning process.
- At the beginning of the learning process, the chip explores weight changes in different directions: the error-decrement signal oscillates, and the output magnitude increases slowly.
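The error computation described above (rectify, then low-pass filter) can be sketched as follows. The filter time constant and sample rate are assumptions chosen for illustration:

```python
import math

def oscillation_error(samples, dt, tau=0.02):
    """Oscillation-magnitude estimate: full-wave rectify each sample,
    then smooth with a first-order low-pass filter of time constant tau."""
    alpha = dt / (tau + dt)
    e = 0.0
    for s in samples:
        e += alpha * (abs(s) - e)
    return e

# Example: a 400 Hz sine of amplitude 1 sampled at 100 kHz for 0.2 s.
# The filtered rectified value settles near the mean of |sin|, 2/pi.
dt = 1e-5
sig = [math.sin(2 * math.pi * 400 * k * dt) for k in range(20_000)]
err = oscillation_error(sig, dt)
```

This scalar tracks the oscillation envelope, so the one-bit comparison of successive error values is all the RWC learning hardware needs.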
C. Unstable Oscillations
- The experimental test above shows the chip suppressing a stable oscillation; in this section, the authors present experimental results of the chip suppressing unstable oscillations.
- The chip suppresses the oscillation within around 1 s and limits the magnitude to within 0.3.
- There are two large recurring blowups with magnitudes of 1.8 and 0.7, after which the engine output magnitude is limited to within 0.5.
- In addition, performance is limited by the I/O card delay and the software simulation speed (Fig. 18).
D. Controller Output
- The control signal generated by the neural-network controller is presented and analyzed.
Frequently Asked Questions (2)
Q1. What have the authors contributed in "A CMOS feedforward neural-network chip with on-chip parallel learning for oscillation cancellation"?
This paper presents a mixed signal CMOS feedforward neural-network chip with on-chip error-reduction hardware for real-time adaptation. With hardware experiments, the authors demonstrate that the RWC chip, as a direct feedback controller, successfully suppresses unstable oscillations modeling combustion engine instability in real time.
Q2. What are the future works in "A CMOS feedforward neural-network chip with on-chip parallel learning for oscillation cancellation"?
Future work includes adjustable learning step size proportional to the error signal and nonvolatile weight storage.