Amir Hertz

Researcher at Tel Aviv University

Publications - 22
Citations - 1277

Amir Hertz is an academic researcher from Tel Aviv University. He has contributed to research in topics including Computer science and Polygon mesh. The author has an h-index of 5 and has co-authored 13 publications receiving 296 citations. Previous affiliations of Amir Hertz include Microsoft.

Papers
Journal Article

MeshCNN: a network with an edge

TL;DR: This paper utilizes the unique properties of the triangular mesh for direct analysis of 3D shapes using MeshCNN, a convolutional neural network designed specifically for triangular meshes, and demonstrates its effectiveness on a variety of learning tasks applied to 3D meshes.
Proceedings Article

Prompt-to-Prompt Image Editing with Cross Attention Control

TL;DR: This paper examines a text-conditioned model in depth and observes that the cross-attention layers are the key to controlling the relation between the spatial layout of the image and each word in the prompt, and presents several applications that control the image synthesis by editing the textual prompt only.
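
The core mechanism lends itself to a short illustration. Below is a minimal sketch, in plain PyTorch, of how cross-attention maps recorded while generating a source prompt can be re-injected when generating an edited prompt so that the spatial layout is preserved. The class and attribute names (CrossAttention, stored_attn, inject) are hypothetical, and this is not the authors' implementation.

```python
# Minimal cross-attention layer with an optional "map injection" switch,
# illustrating the prompt-to-prompt idea; names and structure are assumptions.
import torch


class CrossAttention(torch.nn.Module):
    def __init__(self, dim, text_dim, heads=8):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.to_q = torch.nn.Linear(dim, dim, bias=False)
        self.to_k = torch.nn.Linear(text_dim, dim, bias=False)
        self.to_v = torch.nn.Linear(text_dim, dim, bias=False)
        self.stored_attn = None   # attention maps saved from the source-prompt pass
        self.inject = False       # when True, reuse the stored maps

    def forward(self, x, text_emb):
        # x: (B, N, dim) image tokens; text_emb: (B, M, text_dim) prompt tokens
        b, n, _ = x.shape
        q, k, v = self.to_q(x), self.to_k(text_emb), self.to_v(text_emb)
        # split heads: (B, heads, tokens, head_dim)
        q, k, v = (t.view(b, -1, self.heads, t.shape[-1] // self.heads).transpose(1, 2)
                   for t in (q, k, v))
        attn = (q @ k.transpose(-1, -2) * self.scale).softmax(dim=-1)
        if self.inject and self.stored_attn is not None:
            attn = self.stored_attn        # reuse the source prompt's spatial layout
        else:
            self.stored_attn = attn.detach()
        out = attn @ v
        return out.transpose(1, 2).reshape(b, n, -1)
```

A first pass with the source prompt (inject=False) records the maps; a second pass with the edited prompt, padded to the same token length, and inject=True reuses them, which corresponds to a simplified form of the word-swap edit described in the paper.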
Journal Article

MeshCNN: A Network with an Edge

TL;DR: MeshCNN, as discussed by the authors, combines specialized convolution and pooling layers that operate on the mesh edges by leveraging their intrinsic geodesic connections, and learns which edges to collapse, forming a task-driven process in which the network exposes and expands the important features while discarding the redundant ones.
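
A rough sketch of the edge-based convolution may help make this concrete. The snippet below is a simplified reading of the paper rather than the official MeshCNN code: a filter is applied to each edge feature together with order-invariant combinations of its four neighbouring edges (two from each adjacent triangle). The class name EdgeConv and the tensor layout are assumptions.

```python
# Sketch of an edge convolution over triangular-mesh edges; not the official code.
import torch


class EdgeConv(torch.nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # 5 values per edge: the edge feature plus 4 symmetric neighbour terms
        self.conv = torch.nn.Conv2d(in_ch, out_ch, kernel_size=(1, 5))

    def forward(self, feats, nbrs):
        # feats: (B, C, E) per-edge features
        # nbrs:  (B, E, 4) indices of the four edges sharing a triangle with each edge
        B, C, E = feats.shape
        gather_idx = nbrs.unsqueeze(1).expand(B, C, E, 4)
        n = torch.gather(feats.unsqueeze(-1).expand(B, C, E, 4), 2, gather_idx)
        a, b, c, d = n.unbind(dim=-1)
        # order-invariant combinations keep the filter robust to the ambiguous
        # ordering of the two adjacent faces
        stacked = torch.stack(
            [feats, (a - c).abs(), a + c, (b - d).abs(), b + d], dim=-1)  # (B, C, E, 5)
        return self.conv(stacked).squeeze(-1)                             # (B, out_ch, E)
```

In the paper, the initial per-edge input consists of five simple geometric features (the dihedral angle, two inner angles, and two edge-length ratios), and pooling is implemented by collapsing the edges the network deems least important for the task.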
Journal Article

Null-text Inversion for Editing Real Images using Guided Diffusion Models

TL;DR: This paper proposes a null-text inversion based on the Stable Diffusion model for text-guided image generation, which allows applying prompt-based editing to real images while avoiding cumbersome tuning of the model's weights.
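
The optimization at the heart of the method can be sketched compactly. The function below is a hypothetical outline, not the authors' code: for each timestep it tunes the unconditional ("null") text embedding so that a classifier-free-guided step lands on the latent recorded by a plain DDIM inversion of the real image. Here eps_model and ddim_step stand in for the diffusion model's noise predictor and DDIM update, and all names and default values are assumptions.

```python
# Sketch of per-timestep null-text optimisation around a DDIM inversion trajectory.
import torch
import torch.nn.functional as F


def null_text_inversion(eps_model, ddim_step, z_traj, timesteps, cond_emb,
                        null_emb, guidance=7.5, iters=10, lr=1e-2):
    """eps_model(z, t, emb) -> predicted noise; ddim_step(z, eps, t) -> latent at
    the next (less noisy) timestep. z_traj holds the DDIM-inversion trajectory of
    the real image, ordered from noisiest to cleanest (len(timesteps) + 1 entries)."""
    tuned_nulls = []
    z = z_traj[0]                                  # start from the inverted noise
    for i, t in enumerate(timesteps):
        target = z_traj[i + 1]                     # where this step should land
        with torch.no_grad():                      # prompt branch needs no gradients
            eps_cond = eps_model(z, t, cond_emb)
        null = null_emb.clone().requires_grad_(True)
        opt = torch.optim.Adam([null], lr=lr)
        for _ in range(iters):                     # tune the null embedding only
            eps_uncond = eps_model(z, t, null)
            eps = eps_uncond + guidance * (eps_cond - eps_uncond)
            loss = F.mse_loss(ddim_step(z, eps, t), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
        tuned_nulls.append(null.detach())
        with torch.no_grad():                      # advance with the tuned embedding
            eps_uncond = eps_model(z, t, null)
            eps = eps_uncond + guidance * (eps_cond - eps_uncond)
            z = ddim_step(z, eps, t)
    return tuned_nulls                             # one null embedding per timestep
```

The returned per-timestep null embeddings, used together with the original prompt, reproduce the real image almost exactly, and prompt-based edits can then be applied on top of that trajectory without touching the model's weights.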
Proceedings Article

MotionCLIP: Exposing Human Motion Generation to CLIP Space

TL;DR: It is shown that although CLIP has never seen the motion domain, MotionCLIP offers unprecedented text-to-motion abilities, allowing out-of-domain actions, disentangled editing, and abstract language specification.
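
The alignment idea can be illustrated with a small sketch. The code below is a hypothetical simplification, not the released MotionCLIP implementation: a motion auto-encoder is trained with a reconstruction loss plus a cosine loss that pulls each motion's latent code towards the CLIP embedding of its text label (the latent dimension is set to 512 to match a CLIP text embedding such as ViT-B/32's). The module names, the GRU backbone, and the loss weight are assumptions.

```python
# Sketch of aligning a motion latent space with CLIP text embeddings; not the official code.
import torch
import torch.nn.functional as F


class MotionAutoEncoder(torch.nn.Module):
    def __init__(self, pose_dim=75, latent_dim=512):
        super().__init__()
        self.enc = torch.nn.GRU(pose_dim, latent_dim, batch_first=True)
        self.dec = torch.nn.GRU(latent_dim, latent_dim, batch_first=True)
        self.out = torch.nn.Linear(latent_dim, pose_dim)

    def encode(self, motion):                      # motion: (B, T, pose_dim)
        _, h = self.enc(motion)
        return h[-1]                               # (B, latent_dim) motion latent

    def decode(self, z, length):
        seq, _ = self.dec(z.unsqueeze(1).repeat(1, length, 1))
        return self.out(seq)                       # (B, T, pose_dim) reconstruction


def motionclip_loss(model, motion, clip_text_emb, w_align=0.1):
    """clip_text_emb: pre-computed CLIP text embeddings of the labels, shape (B, 512)."""
    z = model.encode(motion)
    recon = model.decode(z, motion.shape[1])
    rec_loss = F.mse_loss(recon, motion)
    # cosine alignment pulls the motion latent towards its CLIP text embedding
    align = 1 - F.cosine_similarity(z, clip_text_emb, dim=-1).mean()
    return rec_loss + w_align * align
```

In the paper, the latent is additionally aligned with CLIP image embeddings of rendered frames; once trained, text-to-motion amounts to decoding the CLIP embedding of an arbitrary sentence.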