
Minhao Yang

Researcher at Columbia University

Publications -  30
Citations -  1476

Minhao Yang is an academic researcher from Columbia University. The author has contributed to research on topics including pixel design and feature extraction. The author has an h-index of 14, having co-authored 29 publications receiving 935 citations. Previous affiliations of Minhao Yang include the University of Zurich and Peking University.

Papers
Journal Article

A 240 × 180 130 dB 3 µs Latency Global Shutter Spatiotemporal Vision Sensor

TL;DR: This paper presents a dynamic and active pixel vision sensor (DAVIS) that concurrently outputs asynchronous DVS events and synchronous global-shutter frames, addressing the lack of concurrent frame readout in prior event-based sensors.
Proceedings Article

A 240×180 10 mW 12 µs latency sparse-output vision sensor for mobile applications

TL;DR: A 0.18 µm CMOS vision sensor is proposed that combines event-driven asynchronous readout of temporal contrast with synchronous frame-based active pixel sensor readout of intensity. It is suitable for mobile applications because it achieves low latency at a low data rate and therefore low system-level power consumption.
Journal Article

A Dynamic Vision Sensor With 1% Temporal Contrast Sensitivity and In-Pixel Asynchronous Delta Modulator for Event Encoding

TL;DR: A dynamic vision sensor with improved temporal contrast (TC) sensitivity and an in-pixel asynchronous delta modulator for event encoding can facilitate the application of DVSs in areas like optical neuroimaging, as demonstrated in a simulated experiment.
Journal Article

A 0.5 V 55 µW 64×2-Channel Binaural Silicon Cochlea for Event-Driven Stereo-Audio Sensing

TL;DR: A 0.5 V 55 µW 64×2-channel binaural silicon cochlea is presented, aimed at ultra-low-power Internet of Everything (IoE) applications such as event-driven voice activity detection (VAD), sound source localization, speaker identification, and primitive speech recognition.
Journal Article

Design of an Always-On Deep Neural Network-Based 1-µW Voice Activity Detector Aided With a Customized Software Model for Analog Feature Extraction

TL;DR: This paper presents an ultra-low-power voice activity detector (VAD) that uses analog signal processing for acoustic feature extraction (AFE) directly on the microphone output, approximate event-driven analog-to-digital conversion (ED-ADC), and a digital deep neural network (DNN) for speech/non-speech classification.