
Tianjian Li

Researcher at Shanghai Jiao Tong University

Publications: 17
Citations: 91

Tianjian Li is an academic researcher at Shanghai Jiao Tong University. The author has contributed to research in the topics of Artificial neural network and Test method, has an h-index of 4, and has co-authored 17 publications receiving 53 citations.

Papers
Proceedings ArticleDOI

Sneak-Path Based Test and Diagnosis for 1R RRAM Crossbar Using Voltage Bias Technique

TL;DR: Voltage bias is used to manipulate various distributions of sneak paths, screening one or multiple faults out of a 4 × 4 region of memristors at once and diagnosing the exact location of each faulty memristor within three write-read operations.
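The summary above describes aggregated, bias-controlled reads that screen a whole 4 × 4 region at once. A toy single-fault model can sketch that idea; this is not the paper's actual electrical procedure, and `region_read`, `row_read`, and `col_read` are hypothetical stand-ins for biased read operations:

```python
# Toy model of sneak-path-based screening in a 4x4 RRAM region
# (hypothetical simplification, not the paper's voltage-bias protocol).
# A biased read aggregates many cells at once, so three aggregated
# reads suffice to locate a single faulty cell.

def region_read(faults):
    """Read 1: True if any cell anywhere in the 4x4 region is faulty."""
    return any(any(row) for row in faults)

def row_read(faults):
    """Read 2: biased read exposing which rows contain a fault."""
    return [any(row) for row in faults]

def col_read(faults):
    """Read 3: biased read exposing which columns contain a fault."""
    return [any(faults[r][c] for r in range(4)) for c in range(4)]

def diagnose_single(faults):
    """Locate one faulty cell using three aggregated read operations."""
    if not region_read(faults):      # whole-region screen
        return None                  # region is fault-free
    rows = row_read(faults)
    cols = col_read(faults)
    return (rows.index(True), cols.index(True))
```

For a single fault, the intersection of the flagged row and flagged column pins down the exact cell, which mirrors the "three write-read operations" claim in spirit.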
Journal ArticleDOI

ITT-RNA: Imperfection Tolerable Training for RRAM-Crossbar-Based Deep Neural-Network Accelerator

TL;DR: An accelerator-friendly neural-network training method that leverages the inherent self-healing capability of the neural network to prevent large-weight synapses from being mapped to imperfect memristor cells, together with a dynamic adjustment mechanism that extends the method to DNNs such as multilayer perceptrons (MLPs).
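The core idea of imperfection-tolerant training can be illustrated with a deliberately simple fault-injection sketch (this is not the ITT-RNA algorithm itself; the fault map and linear model are illustrative assumptions): cells known to be defective are clamped during the forward pass, so training adapts the healthy cells around them.

```python
import numpy as np

# Toy fault-aware training sketch (hypothetical, not the paper's
# ITT-RNA method): cells marked 0 in `fault_mask` behave as
# stuck-at-zero memristors, so no weight or gradient ever lands there.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=(8, 4))
Y = X @ w_true                        # synthetic regression targets

fault_mask = np.ones((8, 4))
fault_mask[1, 2] = 0.0                # two stuck-at-zero cells
fault_mask[5, 0] = 0.0

W = np.zeros((8, 4))
for _ in range(500):
    W_eff = W * fault_mask            # weights as realized on-chip
    grad = X.T @ (X @ W_eff - Y) / len(X)
    W -= 0.1 * grad * fault_mask      # no update flows into dead cells

loss = float(np.mean((X @ (W * fault_mask) - Y) ** 2))
```

Because the faulty cells are already zero during training, the deployed crossbar sees exactly the weights that were optimized, instead of suffering a surprise accuracy drop at mapping time.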
Journal ArticleDOI

A Novel Test Method for Metallic CNTs in CNFET-Based SRAMs

TL;DR: This paper proposes a novel low-cost test solution to detect m-CNT-induced SRAM faults, along with three jump test algorithms for different CNFET-SRAM layouts to ensure high fault coverage.
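The paper's layout-specific jump test sequences are not reproduced in the summary; as a stand-in, a classic march-style element over a simulated array shows the general write-read structure that such memory tests share:

```python
# Generic march-style memory test sketch (March C- flavor), shown only
# to illustrate the write/read structure of SRAM test algorithms; the
# paper's layout-specific "jump" sequences are different and not
# reproduced here.

class FaultySRAM:
    """Simulated SRAM with an optional stuck-at fault at one address."""
    def __init__(self, size, stuck_addr=None, stuck_val=None):
        self.mem = [0] * size
        self.stuck_addr, self.stuck_val = stuck_addr, stuck_val

    def write(self, addr, val):
        if addr != self.stuck_addr:   # writes to the stuck cell are lost
            self.mem[addr] = val

    def read(self, addr):
        return self.stuck_val if addr == self.stuck_addr else self.mem[addr]

def march_test(sram, size):
    """Return the addresses failing a simple up/down march sequence."""
    fails = set()
    for a in range(size):                 # up: w0
        sram.write(a, 0)
    for a in range(size):                 # up: r0, w1
        if sram.read(a) != 0:
            fails.add(a)
        sram.write(a, 1)
    for a in reversed(range(size)):       # down: r1, w0
        if sram.read(a) != 1:
            fails.add(a)
        sram.write(a, 0)
    return sorted(fails)
```

A stuck-at-1 cell fails the "read 0" element and is flagged; a fault-free array passes every read.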
Proceedings ArticleDOI

Container-code recognition system based on computer vision and deep neural networks

TL;DR: An automatic container-code recognition system based on computer vision and deep neural networks, which handles a wider range of situations and combines two detection methods so that their results offset each other's drawbacks and yield a better overall detection result.
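The summary does not spell out how the two detection methods are combined; one plausible, purely illustrative scheme is to take the union of both detectors' bounding boxes while merging near-duplicates by intersection-over-union (the box format and threshold below are assumptions):

```python
# Hypothetical sketch of combining two text detectors' outputs
# (the paper's actual combination rule is not given in the summary).
# Boxes are axis-aligned (x1, y1, x2, y2) tuples.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def combine(dets_a, dets_b, thresh=0.5):
    """Union of both methods' detections, merging near-duplicates."""
    merged = list(dets_a)
    for box in dets_b:
        if all(iou(kept, box) < thresh for kept in merged):
            merged.append(box)        # only method B found this region
    return merged
```

Keeping the union lets each method cover the other's misses, which matches the summary's claim of combining results to avoid the drawbacks of the two methods.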
Proceedings ArticleDOI

HUBPA: high utilization bidirectional pipeline architecture for neuromorphic computing

TL;DR: This paper proposes a novel ReRAM-based bidirectional pipeline architecture, named HUBPA, that accelerates the training of convolutional neural networks with higher utilization of the computing resources, along with an accessory control scheme for the context switch between the two tasks.
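Why pipeline utilization matters can be shown with generic pipeline arithmetic (this is textbook pipelining math, not HUBPA's actual bidirectional schedule): when passes run back-to-back through shared stages instead of serially, the fraction of busy stage-time-steps rises sharply.

```python
# Generic pipeline-utilization arithmetic (illustrative only; HUBPA's
# bidirectional forward/backward scheduling is more involved).
# Each pass occupies `stages` stages for one time step each.

def utilization(num_passes, stages, overlapped):
    """Fraction of stage-time-steps kept busy over the whole schedule."""
    if overlapped:
        total_time = num_passes + stages - 1   # passes enter every step
    else:
        total_time = num_passes * stages       # each pass runs alone
    busy = num_passes * stages                 # stage-steps of real work
    return busy / (total_time * stages)
```

For 8 passes through 4 stages, serial execution keeps only 25% of the stage-time busy, while overlapped execution reaches 8/11 ≈ 73%, which is the kind of gap a higher-utilization pipeline architecture targets.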