
Yaniv Azar

Researcher at New York University

Publications -  11
Citations -  8616

Yaniv Azar is an academic researcher from New York University. The author has contributed to research in the topics Encoder and Pixel. The author has an h-index of 6 and has co-authored 11 publications receiving 6690 citations.

Papers
Journal ArticleDOI

Millimeter Wave Mobile Communications for 5G Cellular: It Will Work!

TL;DR: The motivation for new mm-wave cellular systems, the measurement methodology, and the hardware used are presented, along with a variety of measurement results showing that the 28 and 38 GHz frequencies can be used when employing steerable directional antennas at base stations and mobile devices.
Posted Content

Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation

TL;DR: This work presents a generic image-to-image translation framework, pixel2style2pixel (pSp), based on a novel encoder network that directly generates a series of style vectors which are fed into a pretrained StyleGAN generator, forming the extended latent space.
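As a rough illustration of the pixel-to-style-to-pixel data flow described above (this is not the authors' implementation; the module names, shapes, and constants below are hypothetical stand-ins), the pipeline can be sketched as:

```python
import numpy as np

N_STYLES = 18    # assumed number of per-layer style inputs (StyleGAN2 at 1024x1024)
STYLE_DIM = 512  # assumed dimensionality of each style vector

def encoder(image):
    """Stand-in for the pSp encoder: maps an input image to a
    series of style vectors, one per generator layer (the extended
    W+ latent space)."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((N_STYLES, STYLE_DIM))

def pretrained_stylegan(style_vectors):
    """Stand-in for a frozen, pretrained StyleGAN generator: consumes
    one style vector per layer and synthesizes an output image."""
    assert style_vectors.shape == (N_STYLES, STYLE_DIM)
    return np.zeros((1024, 1024, 3))  # dummy synthesized image

# pixel -> style -> pixel: encode the input image into W+ codes,
# then decode them with the fixed generator
input_image = np.zeros((256, 256, 3))
w_plus = encoder(input_image)               # extended W+ latent codes
output_image = pretrained_stylegan(w_plus)  # translated image
```

The key design point the abstract describes is that only the encoder is task-specific; the StyleGAN generator stays fixed, so different image-to-image translation tasks reduce to learning different encoders into the same W+ space.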
Proceedings ArticleDOI

28 GHz millimeter wave cellular communication measurements for reflection and penetration loss in and around buildings in New York city

TL;DR: Measured reflection coefficients and penetration losses for common building materials at 28 GHz show that outdoor building materials are excellent reflectors, with the largest measured reflection coefficient of 0.896 for tinted glass, whereas indoor building materials are less reflective.
Proceedings ArticleDOI

28 GHz propagation measurements for outdoor cellular communications using steerable beam antennas in New York city

TL;DR: The world's first empirical measurements for 28 GHz outdoor cellular propagation in New York City are presented, suggesting that millimeter wave mobile communication systems with electrically steerable antennas could exploit resolvable multipath components to create viable links for cell sizes on the order of 200 m.
Proceedings ArticleDOI

Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation

TL;DR: The pixel2style2pixel (pSp) framework, as discussed by the authors, is based on a novel encoder network that directly generates a series of style vectors which are fed into a pretrained StyleGAN generator, forming the extended $\mathcal{W}+$ latent space.